Dataset schema:
- forum_id: string, length 9-20
- forum_title: string, length 3-179
- forum_authors: sequence, length 0-82
- forum_abstract: string, length 1-3.52k
- forum_keywords: sequence, length 1-29
- forum_decision: string, 22 classes
- forum_pdf_url: string, length 39-50
- forum_url: string, length 41-52
- venue: string, 46 classes
- year: date string, 2013-01-01 00:00:00 to 2025-01-01 00:00:00
- reviews: sequence
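The schema can be consumed programmatically. Below is a minimal sketch, assuming the data is published as a Hugging Face dataset and that the reviews column materializes as parallel lists (note_id, note_type, structured_content_str, ...) the way the record below shows it; the repo id used is a hypothetical placeholder, not the dataset's actual name.

```python
# Minimal sketch for loading and inspecting one record with the schema above.
# "org/openreview-forums" is a hypothetical placeholder repo id (assumption).
import json

from datasets import load_dataset

ds = load_dataset("org/openreview-forums", split="train")  # placeholder name
row = ds[0]

print(row["forum_title"], "|", row["forum_decision"], "|", row["venue"])

# "reviews" is assumed to be a struct of parallel lists, as in the record below;
# each structured_content_str entry is itself a JSON-encoded note body
# (meta_review, official_review, official_comment, or decision).
reviews = row["reviews"]
for note_id, note_type, raw in zip(
    reviews["note_id"], reviews["note_type"], reviews["structured_content_str"]
):
    content = json.loads(raw)
    print(note_id, note_type, sorted(content.keys()))
```

Parsing each structured_content_str with json.loads recovers the nested fields (e.g. metareview, summary, rating) that appear verbatim in the reviews blob at the end of the record.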
forum_id: 04c5uWq9SA
forum_title: A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage
forum_authors: [ "Rui Xin", "Niloofar Mireshghallah", "Shuyue Stella Li", "Michael Duan", "Hyunwoo Kim", "Yejin Choi", "Yulia Tsvetkov", "Sewoong Oh", "Pang Wei Koh" ]
forum_abstract: The release of sensitive data often relies on synthetic data generation and Personally Identifiable Information (PII) removal, with an inherent assumption that these techniques ensure privacy. However, the effectiveness of sanitization methods for text datasets has not been thoroughly evaluated. To address this critical gap, we propose the first privacy evaluation framework for the release of sanitized textual datasets. In our framework, a sparse retriever initially links sanitized records with target individuals based on known auxiliary information. Subsequently, semantic matching quantifies the extent of additional information that can be inferred about these individuals from the matched records. We apply our framework to two datasets: MedQA, containing medical records, and WildChat, comprising individual conversations with ChatGPT. Our results demonstrate that seemingly innocuous auxiliary information, such as specific speech patterns, can be used to deduce personal attributes like age or substance use history from the synthesized dataset. We show that private information can persist in sanitized records at a semantic level, even in synthetic data. Our findings highlight that current data sanitization methods create a false sense of privacy by making only surface-level textual manipulations. This underscores the urgent need for more robust protection methods that address semantic-level information leakage.
[ "Privacy", "NLP", "Text", "Reidentification", "Data Release", "Sanitization", "Anonymization" ]
Reject
https://openreview.net/pdf?id=04c5uWq9SA
https://openreview.net/forum?id=04c5uWq9SA
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zwPebtqPQ2", "zw1m0q1Iz0", "zJ8VzZzPqO", "xcLpkAjL4v", "xazX9Bn51D", "tNPfN13jjk", "p49urqJH8J", "ogMx9Em4sI", "gYKv85ujP6", "cqC7J6ZMIT", "cmnNrHziBw", "cXvqaXOHIQ", "bAMR4pZrk9", "aEkJIQBxYX", "WjZtqDrGjC", "V3LkbtJRV3", "SgleWS8KDt", "RBKJio85jd", "OCGMNkRL5F", "MpqqCV1BqE", "MMhHeHR0sr", "L7N6PQ6Pz6", "IiEcFajcXv", "5r42X52CH8" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734735343749, 1732240214025, 1732816375283, 1730641169972, 1730685020269, 1732239619465, 1733136109184, 1732239987591, 1732581984417, 1732239759694, 1733175726089, 1733135188174, 1732241033951, 1732240323682, 1733133937390, 1730696410114, 1732477370197, 1737523931603, 1729587475750, 1732462856959, 1732240112138, 1732241167521, 1732241122872, 1732241055487 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission8779/Area_Chair_TEit" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Reviewer_Jmiv" ], [ "ICLR.cc/2025/Conference/Submission8779/Reviewer_18vb" ], [ "ICLR.cc/2025/Conference/Submission8779/Reviewer_Jmiv" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Reviewer_HXAX" ], [ "ICLR.cc/2025/Conference/Submission8779/Reviewer_HXAX" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission8779/Reviewer_oF1E" ], [ "ICLR.cc/2025/Conference/Submission8779/Reviewer_oF1E" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ], [ "ICLR.cc/2025/Conference/Submission8779/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The paper introduces a privacy evaluation framework for the release of sanitized textual datasets. This framework is based on two steps: (i) linking, where a sparse retriever matches de-identified samples with potential candidate \\\"sanitized\\\" samples, and (ii) matching, which assesses the information gained about the target by comparing the matched record from the linking step with the private data. A key aspect is replacing lexical matching (e.g., matching names or other personal attributes) with semantic matching.\\n\\nThe paper addresses an important problem with practical relevance in many domains. Data linkage and inference have an over two-decade-long history (perhaps longer), and the authors correctly acknowledge the urgency of the topic given the ever-growing volume of data collected and stored across multiple domains. 
The paper is also well-written and easy to follow.\\n\\nThe paper also has several limitations, which made most reviewers stand by a score that rates the paper slightly below the acceptance threshold. In my view, the main concerns are novelty of the claims and linkage attack (e.g., the use of the BM25 retriever), connection with related work (see comments by reviewer Jmiv), and precision of the experimental evaluation. For the latter, I was surprised that experiments include no std dev/std error in the evaluation, and it is not clear that the finding on the two selected datasets (MedQA and WildChat) would extend generalize to other datasets. This is particularly relevant in light of the confusion surrounding DP results (Jmiv and oF1E).\\n\\nOverall, the paper could benefit from a significant revision and increased precision of the definitions and the experimental results. This would be a clear \\\"major revision\\\" if this were a journal. I believe the importance of the topic demands precise, accurate, and clear numerical results to substantiate the claims of privacy risks. This would make the paper's overall (important) message much stronger and more substantiated. I encourage the authors to review their manuscript and seriously account for the reviewer comments.\", \"additional_comments_on_reviewer_discussion\": \"The reviewers remained tepid after discussion with the authors.\"}", "{\"title\": \"Response to Reviewer Jmiv Part 3\", \"comment\": \"> L147-148: You state your \\\"approach aggregates the auxiliary information into a single text chunk\\\" -- Does that mean you combine (concatenate?) all \\\"atomized claims\\\" x^(i)_j across all j into the \\\"aux info\\\" {\\\\tilde x}^(i)? (Wouldn't hurt to write this down more explicitly.) (Just found the info in L296/Section 3.3 that the attacker gets 3 random claims for the linkage phase. I find that a bit late, it would be better to mention it directly in Section 2.3 to avoid confusion/guessing on the side of the reader.)\\n\\nYes this is exactly what we meant. We\\u2019ll update it in the draft in the next version. \\n\\n> L156: Can you give a specific reason for querying only with claims that were not utilized in the linking phase? \\n\\nOur metric seeks to measure the information gained from having access to released sanitized data. Therefore, when computing the final score, we ignore the auxiliary information. \\n\\n> Besides, how do I know whether a claim about an original document was used for linking? \\n\\nWe randomly select three claims from a given record in this study.\\n\\n> If all atomized claims are combined into the aux info (cf. previous question), and the aux info is used as query in the retriever, wouldn't this imply that all claims are already consumed in the linking phase?\\n\\nFor records containing fewer than three claims, we exclude them from the final privacy metric computation to maintain consistent evaluation conditions across the dataset.\\n\\n> L159: If I understood correctly, you define the \\\"similarity metric\\\" \\u00b5 between the original and linked documents by querying a language model to judge the document similarity, where you assign values on a scale from 1 (for 'identical documents') to 3. I wonder if it would make more sense to start the scale at 0, since mathematically, a metric has the property of evaluating to 0 for identical inputs. 
(In your case, we would get \\\"\\u00b5(x,x) > 0\\\" instead of \\\"\\u00b5(x,x) = 0\\\".)\\n\\nYes, we normalize the score to 0-1 when reporting the numbers in the table. We\\u2019ll add it to the paper.\\n\\n> How do I know that the atomized claims, which are used to compute \\u00b5 and hence to measure privacy preservation, are actually privacy-relevant, and not just some arbitrary, privacy-insensitive facts?\\n\\nAnswered above. \\n\\n> L313: I'm confused regarding the symbol \\\"\\u00b5\\\" seemingly used for multiple purposes. It is defined in Sec. 2.4, but here, you also use it for another metric induced by ROGUE-L scores.\\n\\nWe apologize for the ambiguity regarding \\u00b5. In this baseline, we use ROUGE-L as both the linking function L and the privacy metric \\u00b5 to investigate privacy score using established text similarity metrics. This choice simulates a classical baseline approach where the same algorithmic method serves both purposes. We have revised the notation to distinguish between different implementations of the steps in the paper. \\n\\n> L317 (also L386): You state \\\"zero-shot prompting achieves an accuracy of 0.44\\\" for MedQA task utility, but why am I unable to find that result in Table 1 (for \\\"No Sanitization\\\", it says 0.69)?\\n\\nWe apologize for the unclear terminology. The 0.44 accuracy refers to our baseline measurement where the model receives only the question and multiple choice options, without any context. This represents the model's inherent knowledge. In contrast, the 0.69 accuracy under \\\"No Sanitization\\\" represents our upper bound, where the model receives complete, unmodified context. We have updated the manuscript to reflect this. \\n\\n> Calling the privacy metrics \\\"Overlap\\\" and \\\"Similarity\\\" is very confusing, since they actually mean the opposite (high lexical overlap and semantic similarity would indicate a high agreement between the two documents, but high scores in Table 1 mean good privacy). Name them lexical/semantic \\\"distance\\\" instead?\\n\\nThank you. We have revised the terminology accordingly.\"}", "{\"comment\": \"Thank you for providing the additional explanations. Most make sense, and I assume that you will add the additional explanations to the paper where they are required/helpful for readers to better/quicker follow the paper.\\n\\nFor Table 2, I agree that large $\\\\epsilon$ values can still provide good protection in DP-SGD.\\nHowever, wouldn't it make sense to focus your evaluation on a range of smaller values, say, $\\\\epsilon\\\\in[0.5,3]$? Also, while DP improves privacy for MedQA, it seems to be more detrimental for WildChat, where the drop in utility is more significant than the improvement in privacy. (You make a _general_ claim in L382-383 that \\\"implementing DP, even with relaxed guarantees such as \\u03b5 = 1024, significantly enhances privacy protection\\\", however, this only seems to apply to MedQA.)\"}", "{\"summary\": \"This paper introduces a privacy evaluation framework for data sanitization methods, specifically data anonymization and data synthesis, in the context of natural language. The framework can be summarized as follows: first, random records from the original data are sampled as the auxiliary data; then, an information retrieval technique is used to link the auxiliary data with the sanitized data, and an LLM is utilized to evaluate the semantic similarity between the original records and the linked records. 
The final similarity scores denote the degree of privacy leakage of the sanitized data.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"This paper focuses on the privacy leakage of text data. The authors design different prompts for LLM to evaluate the semantic similarity between two sentences, which is interesting. The experimental results are extensive. Multiple data sanitization techniques are included in the evaluation framework.\", \"weaknesses\": \"1. The main issue of this paper is the definition of privacy leakage, which the authors equate with semantic similarity between auxiliary data and sanitized data. However, the semantic information of a sentence reflects its utility. If the semantic content of a sanitized sentence is altered, the sanitization method would be useless. Traditional data anonymization methods aim to remove only identifiers from a data record rather than all information. In this context, identifiers should be the privacy focus, and privacy leakage should refer specifically to identifier leakage.\\n\\n2. The technical novelty is relatively limited. The linking step uses the existing BM25 retriever, while the semantic similarity evaluation mainly relies on established prompt engineering techniques.\\n\\n3. The findings are not particularly interesting, as it is well-known that simple data anonymization and data synthesis techniques are insufficient to protect data privacy. This paper's findings merely confirm that this limitation also applies to text data.\\n\\n4. The numerical results rely on LLM output, which is relatively qualitative and less persuasive. Additionally, querying LLaMA three times for consistency seems unnecessary; disabling sampling parameters in text generation should ensure consistent results from LLaMA for the same query.\", \"questions\": \"N/A\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a framework to evaluate sanitization methods for releasing datasets with textual data. They highlight that obvious methods such as removing explicit identifiers like names is insufficient for properly protecting privacy, since other semantic details can also leak private information. Also, auxiliary information about an individual may be linkable to a supposedly sanitized target document, thus allowing an attacker to infer or recover further sensitive details.\\n\\nThe goal of the framework is the quantification of information leakage from sanitized documents given auxiliary information about the document's owner or author.\\nThe framework proposes to determine auxiliary information from each original documents by extracting individual \\\"claims\\\". For each document, the attacker is given a subset of claims and runs a sparse retriever to find the best-matching document from the set of sanitized documents.\\nThey then define a similarity metric, which is either determined by an LLM or the ROGUE-L score, to compute the similarity between the retrieved document and the remaining claims extracted from the original document.\\nAdditionally, they define task-specific utility metrics for each evaluated dataset.\\n\\nIn the evaluation, the authors consider two datasets: MedQA from the medical domain with a question-answering task, as well as WildChat consisting of online conversations with ChatGPT and a text categorization task. 
They also consider a range of sanitization methods that either work by removing PII or by generating synthetic data, the latter also with the option of providing differential privacy.\\nIn each scenario, the newly introduced semantic and lexical privacy metrics are computed, along with task-specific utility measures as well as the quality (coherence) of the sanitized texts. Lastly, they perform a human evaluation to determine which variant of the privacy metric best matches human preferences.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Formalizing linkage attacks for unstructured text data is a nice and useful contribution, and enables a systematic evaluation of various (novel) text sanitization methods in the future.\\n\\nWhile not entirely new, cf. e.g., [1] and the already cited (Stadler et al., 2022), the observations that superficial sanitization methods (such as PII removal) are often insufficient to properly protect privacy remains important.\\n\\nFor most parts, the paper is well written and easy to follow. However, there are some uncertainties about metrics and inconsistencies between numbers reported in the texts and tables, which are (in my view) confusing to the reader and undermine the validity of the currently reported results.\", \"weaknesses\": \"I stumbled across some inconsistencies between the numbers reported in the Tables and discussed in the text. Please double-check (cf. questions) and update, or explain the differences.\\n\\nSome details about the metrics and their computation remain unclear (cf. questions). Please try to use consistent naming and define concepts (such as the definition of metrics/distances) in one concise and consecutive piece of text (not spread across several sections).\", \"l323\": \"I think the conclusion from a \\\"disparity between lexical and semantic similarity\\\" to \\\"these techniques primarily modify and paraphrase text without effectively disrupting the underlying connected features and attributes\\\" is made a bit prematurely: Both are entirely different measures, and even for \\\"no sanitization\\\", the lexical score is twice the semantic score. Also, what would happen if you shifted the (apart from the ordering: somewhat arbitrarily) assigned scores for the \\\"similarity metric\\\" in Section 2.4 from {1,2,3} to {0,1,2} or to {1,10,100}?\", \"questions\": \"L081, L099: Just to be sure: If I understood correctly, \\\"claims\\\" can refer to any sensitive or non-sensitive information in the original texts?\", \"l101\": [\"If (1) PII removal had 94% leakage (inferable claims), and (2) data synthesis (with or without DP?) has 9% lower leakage (i.e., 85% ?), why does (3) data synthesis without DP state 57% << 85% leakage?\", \"Section 2.3 Linking Method:\", \"L146: Could you briefly motivate the use of the sparse BM25 retriever? Did you consider dense retrieval methods (say, using some form of text embeddings)? What are the benefits of BM25 vs. other sparse methods, or of sparse methods vs. dense methods, in particular for the linkage task at hand?\", \"L147-148: You state your \\\"approach aggregates the auxiliary information into a single text chunk\\\" -- Does that mean you combine (concatenate?) _all_ \\\"atomized claims\\\" x^(i)_j across all j into the \\\"aux info\\\" {\\\\tilde x}^(i)? (Wouldn't hurt to write this down more explicitly.) (Just found the info in L296/Section 3.3 that the attacker gets 3 random claims for the linkage phase. 
I find that a bit late, it would be better to mention it directly in Section 2.3 to avoid confusion/guessing on the side of the reader.)\", \"Section 2.4 Similarity Metric:\", \"L156: Can you give a specific reason for querying only with claims that were _not_ utilized in the linking phase? Besides, how do I know whether a claim about an original document was used for linking? If _all_ atomized claims are combined into the aux info (cf. previous question), and the aux info is used as query in the retriever, wouldn't this imply that all claims are already consumed in the linking phase?\", \"L159: If I understood correctly, you define the \\\"similarity metric\\\" \\u00b5 between the original and linked documents by querying a language model to judge the document similarity, where you assign values on a scale from 1 (for 'identical documents') to 3. I wonder if it would make more sense to start the scale at 0, since mathematically, a metric has the property of evaluating to 0 for identical inputs. (In your case, we would get \\\"\\u00b5(x,x) > 0\\\" instead of \\\"\\u00b5(x,x) = 0\\\".)\", \"How do I know that the atomized claims, which are used to compute \\u00b5 and hence to measure privacy preservation, are actually privacy-relevant, and not just some arbitrary, privacy-*in*sensitive facts?\"], \"l313\": \"I'm confused regarding the symbol \\\"\\u00b5\\\" seemingly used for multiple purposes. It is defined in Sec. 2.4, but here, you also use it for another metric induced by ROGUE-L scores.\\n\\nL317 (also L386): You state \\\"zero-shot prompting achieves an accuracy of 0.44\\\" for MedQA task utility, but why am I unable to find that result in Table 1 (for \\\"No Sanitization\\\", it says 0.69)?\", \"table_1\": [\"Calling the privacy metrics \\\"Overlap\\\" and \\\"Similarity\\\" is very confusing, since they actually mean the opposite (high lexical overlap and semantic similarity would indicate a high agreement between the two documents, but high scores in Table 1 mean good privacy). Name them lexical/semantic \\\"distance\\\" instead?\", \"Talking about metrics: In Equation 1 you define a \\\"privacy metric\\\", I guess that is what is reported under the (why differently named?) \\\"Semantic Similarity\\\" column in Table 1. It is based on the \\\"similarity metric\\\" from Section 2.4, which has values between 1 and 3 -- How does it end up with values between 0 and 1 in Table 1?? I couldn't see any discussion on some form of normalization of these scores. The expected value of scores >= 1 in Eq. 1 would still result in a value >= 1, and not in [0,1). Please double-check how you actually compute the metrics. Try *not* to distribute information pertaining to one concept across the paper, but put it concisely into one place if possible. Also prefer consistent naming.\"], \"table_2\": [\"The effect of \\\\epsilon appears surprisingly small to me, with only minimal changes across all metrics even when comparing \\\\epsilon=3 and 1024. Can you explain this behavior?\", \"It would be interesting to compare with a random baseline where the utility is determined from completely random texts -- to rule out that 0.4x task utility in the case of MedQA can already be achieved based on completely random input (say, if the dataset suffers from strong class imbalance and the classifier always just guesses the largest class, thus obtaining an overly optimistic accuracy).\"], \"table_3\": \"- L404: What exactly is the \\\"linkage rate\\\"? 
Please specify.\\n- L423: Contradicting statements: Here, you state the last three claims are used, previously in L296, you mentioned 3 randomly selected claims.\\n\\nL443/Section 2.4: If you can switch the similarity metric \\u00b5 also to ROGUE-L, please already state this as possible option in Section 2.4 where you introduce \\u00b5. Currently, you only say there that \\u00b5 is determined by querying a language model.\\n\\nLastly, what are your thoughts on information that is both privacy-sensitive and utility-relevant, say, if one or more atomized claims are also strongly related to a downstream task? For instance, what if an atomized claim turns out to be \\\"John likes baseball\\\", and one of the WildChat categories is \\\"baseball\\\", too? Feel free to substitute \\\"baseball\\\" with something more delicate, such as \\\"drinking alcohol\\\".\\n(Case A: If the baseball aspect is kept in the sanitized document, both \\u00b5 and the chi^2 distance should be small, indicating poor privacy but good utility. Case B: If the baseball aspect was redacted, both \\u00b5 and chi^2 should be larger, indicating better privacy but poorer utility.)\", \"additional_considerations_for_related_work\": [\"[1] also highlights the insufficiencies of superficial sanitization methods for text. [1] and also [3,4,5] propose differentially private methods that obfuscate texts. An evaluation framework for text rewriting has also been introduced previously [6].\", \"[2] has been published in parallel with (Yue et al., 2023) and also suggests differentially private synthetic text generation.\", \"[1] Weggenmann & Kerschbaum, \\\"SynTF: Synthetic and Differentially Private Term Frequency Vectors for Privacy-Preserving Text Mining\\\", SIGIR 2018\", \"[2] Mattern et al., \\\"Differentially Private Language Models for Secure Data Sharing\\\", EMNLP 2022\", \"[3] Weggenmann et al. \\\"DP-VAE: Human-Readable Text Anonymization for Online Reviews with Differentially Private Variational Autoencoders\\\", WWW 2022\", \"[4] Igamberdiev & Habernal, \\\"DP-BART for Privatized Text Rewriting under Local Differential Privacy\\\", ACL Findings 2023\", \"[5] Bo et al., \\\"ER-AE: Differentially Private Text Generation for Authorship Anonymization\\\", NAACL 2019\", \"[6] Igamberdiev et al. \\\"DP-Rewrite: Towards Reproducibility and Transparency in Differentially Private Text Rewriting\\\", COLING 2022\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to All Reviewers\", \"comment\": \"We sincerely thank all reviewers for their thoughtful and constructive feedback. We appreciate the recognition of our paper's strengths, including its systematic evaluation framework, comprehensive experiments, and clear writing. Our work addresses a critical need to better understand the **inherent tension between preserving semantic information for utility while protecting privacy** - a fundamental challenge without simple solutions. Our findings reveal that practitioners who rely on current PII removal and scrubbing methods for text privacy may be operating under a false sense of privacy. This insight is **particularly alarming given the increasing volume of sensitive text data being handled across healthcare, customer service, and other domains** (Mireshghallah et al. 2024). 
Without proper understanding of these limitations, organizations may inadvertently expose sensitive information while believing their data is adequately protected.\\n\\nWhile our work opens up important new research directions in privacy-preserving text sanitization, we have barely scratched the surface of this complex challenge. We have carefully addressed each reviewer's specific concerns in our detailed responses below and have uploaded a revision with changes highlighted in yellow. For each response we have included the reference information, if it was already not in the paper.\"}", "{\"comment\": \"Thank you again for your detailed response to our rebuttal and helpful feedback throughout the review process. As we approach the discussion period deadline, we remain available to address any additional aspects requiring further clarification. We look forward to engaging with any remaining questions you may have.\"}", "{\"title\": \"Response to Reviewer Jmiv Part 1\", \"comment\": \"Thank you for your thoughtful feedback. We appreciate your highlighting our strengths, including the critical contribution of a systematic evaluation of novel sanitization methods and that our writing is easy to follow. We hope the response below addresses your concerns and questions.\\n \\n\\n> Lastly, what are your thoughts on information that is both privacy-sensitive and utility-relevant?\\n\\n> How do I know that the atomized claims, which are used to compute \\u00b5 and hence to measure privacy preservation, are actually privacy-relevant, and not just some arbitrary, privacy-insensitive facts?\\n\\nWe are making an important first step towards making privacy metrics and definitions more relevant and practical. Existing privacy metrics, such as differential privacy, address *data* privacy, where a single token in the textual data is considered private. While this approach works well for structured data (Stadler et al., 2022), text data presents unique challenges that current methods fail to address. As a result, major cloud providers and healthcare organizations continue to rely on simple PII detection and de-identification (Johnson et al., 2020), following outdated privacy models (Garcia et al., 2019).\\n\\nWe take a first step towards addressing this gap by proposing, for the first time, an *inferential* privacy metric. Our results quantify the failure of existing approaches\\u2013showing 94% information leakage with *state-of-the-art PII removal methods*--and provide compelling evidence that current approaches are fundamentally inadequate.\\n\\nIdeally, one would want *contextual* privacy metric, which can take into account (i) which information is more privacy-relevant and (ii) which information is private in the context that the textual information is being shared. These are extremely challenging questions that we believe are beyond the scope of this paper. Nevertheless, they represent exciting research directions to pursue, particularly given recent advances in LLMs. We have added this discussion to the limitation section.\\n\\n\\nReferences\\n\\nJohnson, A. E., Bulgarelli, L., & Pollard, T. J. (2020, April). Deidentification of free-text medical records using pre-trained bidirectional transformers. In Proceedings of the ACM Conference on Health, Inference, and Learning (pp. 214-221).\\n\\nGarcia, D. (2019). Privacy beyond the individual. Nature human behaviour, 3(2), 112-113.\\n\\nStadler, Theresa, Bristena Oprisanu, and Carmela Troncoso. 
\\u201cSynthetic Data -- Anonymisation Groundhog Day.\\u201d arXiv, January 24, 2022. http://arxiv.org/abs/2011.07018.\\n\\n\\n> L323: I think the conclusion from a \\\"disparity between lexical and semantic similarity\\\" to \\\"these techniques primarily modify and paraphrase text without effectively disrupting the underlying connected features and attributes\\\" is made a bit prematurely: Both are entirely different measures, and even for \\\"no sanitization\\\", the lexical score is twice the semantic score. \\n\\nWe agree with the reviewer that direct numerical comparison between lexical and semantic scores may not be methodologically ideal, but we focus on how users interpret these privacy metrics. In practice, users often interpret these numbers as direct indicators of privacy protection levels. By showing both metrics, we provide a more complete picture that helps users avoid over-relying on any single measure when assessing privacy guarantees, which can later be used on privacy nutrition labels designed to help practitioners (Smart et al., 2024). This dual approach promotes a more nuanced understanding of actual privacy protection rather than depending on potentially misleading single metrics (Kelley et al., 2009).\\n\\nReferences\\n\\nSmart, M. A., Nanayakkara, P., Cummings, R., Kaptchuk, G., & Redmiles, E. (2024). Models matter: Setting accurate privacy expectations for local and central differential privacy. arXiv preprint arXiv:2408.08475.\\n\\nKelley, P. G., Bresee, J., Cranor, L. F., & Reeder, R. W. (2009, July). A\\\" nutrition label\\\" for privacy. In Proceedings of the 5th Symposium on Usable Privacy and Security (pp. 1-12)\"}", "{\"title\": \"Response to Reviewer oF1E\", \"comment\": \"> Thanks for your rebuttal to clarify a few disagreements. Except I still think that the auxiliary data is highly contrived, I agree with the other comment. So, I will raise my review score.\\n\\nThank you for your feedback and score revision. To address the concerns about auxiliary data, we conducted an additional experiment where we used an LM (LLaMa 3 8B; prompt in Appendix D.2.2) to paraphrase the auxiliary information to reduce direct textual overlap with the original text. For example, the auxiliary information\\n\\n> \\\"Auscultation of the lungs does not reveal any significant abnormalities. He consumed 3 glasses of the drink before symptoms developed. On physical examination, he is disoriented.\\\"\\n\\nis paraphrased into \\n\\n> \\\"A thorough examination of the patient's lungs did not uncover any notable issues. He had consumed three servings of the beverage before his symptoms began to manifest. Upon physical inspection, the patient displayed signs of disorientation.\\\"\\n\\nOverall, bi-gram overlap (as measured by ROUGE-2 precision) between the paraphrased and original auxiliary information decreases from 71.0% to 19.9% for MedQA and from 40.5% to 21.0% for WildChat.\", \"we_repeated_our_privacy_analysis_using_the_new_paraphrased_auxiliary_information_and_found_that\": \"1. The relative performance patterns across sanitization methods remain consistent whether using original or paraphrased auxiliary data\\u2014methods showing higher leakage with original auxiliary data also show higher leakage with paraphrased data. For example, the relative distances between AzureAI PII tool < Dou et al. (2024) is preserved when we switch to paraphrased auxiliary data. \\n2. 
Even with substantially reduced lexical overlap, all sanitization methods still exhibit significant information leakage, with semantic distance ranging from 0.22 to 0.57 when using paraphrased auxiliary data. A semantic distance of 0.57 means roughly that 43% of the information is leaked (assuming no partial information leakage). As you pointed out, BM25 is particularly sensitive to paraphrasing, so we expect we would be able to recover even more information using a semantic (dense) retriever. \\n\\nThese results demonstrate that existing sanitization approaches fail to prevent information leakage, even when evaluated under conditions of reduced textual overlap. We have added this analysis to the revision in Appendix B.1. Thank you for raising this question.\\n\\n| Dataset | Sanitization Method | Semantic Distance | Semantic Distance with Paraphrased Aux Info |\\n|---------|-------------------|------------------|------------------------------------------|\\n| **MedQA** | No Sanitization | 0.04 | 0.22 |\\n| | Sanitize & Paraphrase | 0.31 | 0.35 |\\n| | Azure AI PII tool | 0.06 | 0.26 |\\n| | Dou et al. (2023) | 0.34 | 0.50 |\\n| | Staab et al. (2024) | 0.33 | 0.57 |\\n| **WildChat** | No Sanitization | 0.19 | 0.26 |\\n| | Sanitize & Paraphrase | 0.44 | 0.50 |\\n| | Azure AI PII tool | 0.21 | 0.30 |\\n| | Dou et al. (2023) | 0.22 | 0.28 |\\n| | Staab et al. (2024) | 0.40 | 0.47 |\\n\\nTable 1. Privacy scores measured using original vs. paraphrased auxiliary information across sanitization methods.\"}", "{\"title\": \"Response to Reviewer HXAX\", \"comment\": \"> \\u201cDid you find major differences between the two datasets in terms of atomizing claims in documents? It seems to me that this would be more structured in a medical Q&A dataset, as compared to LLM-user interactions.\\u201d\\n\\nThank you for this insightful question about dataset differences. We'll address this by examining three key aspects: (1) the structural patterns we found in each dataset type, (2) how these differences affected sanitization effectiveness, and (3) the practical implications for privacy protection.\\n\\n**(1) Structural Differences in Claims**: In MedQA, we found highly structured patterns with consistent medical attributes - 89% of records contained patient age, 81% included specific symptoms, and 63% contained medical history information, with an average of 15.6 distinct medical claims per document. This structured nature made the atomization process more systematic - we could reliably separate claims about symptoms, medical history, and demographics. However, this revealed a key privacy challenge: even after sanitization, the semantic relationships between medical attributes remained intact, making re-identification possible through these linked attributes. This was particularly problematic due to the sparsity of specific age-symptom-history combinations in medical data - **unique combinations of these attributes could often identify a single patient even when individually sanitized.**\\n\\n**(2) Dataset-Specific Sanitization Effectiveness**: The structural differences led to interesting patterns in sanitization effectiveness. For MedQA, while DP-based synthesis achieved strong privacy scores (0.92), it showed significant utility degradation (-22%) on medical reasoning tasks compared to non-dp data synthesis method, leaving the utility lower than the model\\u2019s internal knowledge. 
This sharp utility drop occurred because medical reasoning requires precise preservation of sparse, specialized attribute combinations - even small perturbations in the relationships between symptoms, age, and medical history can change the diagnostic implications. Identifier removal performed poorly (privacy score 0.34) as it couldn't break these revealing semantic connections between medical attributes.\\n\\nIn contrast, WildChat showed more promising results with DP-based synthesis, maintaining better utility (only -12% degradation from non-dp to an epsilon of 64). This better privacy-utility balance stems from two key characteristics of conversational data: First, the information density is lower - unlike medical records where each attribute combination is potentially crucial, conversations contain redundant information and natural paraphrasing. Second, the success criteria for conversations are more flexible - small variations in phrasing or exact details often don't impact the core meaning or usefulness of the exchange. This made the dataset more robust to the noise introduced by DP-based synthesis while still maintaining meaningful content.\\n\\n**(3) Practical Guidelines for Sanitization**: Our findings challenge the common practice of relying on PII removal and scrubbing methods for text privacy, showing they provide a false sense of security. These insights are particularly timely as organizations increasingly handle sensitive text data across healthcare, customer service, and other domains. We thank you again for your thoughtful feedback and will incorporate the above discussion in the paper.\"}", "{\"comment\": \"Thank you for your feedback. We have incorporated these insights into the Discussion section of the revised manuscript.\"}", "{\"comment\": \"Thank you again for your review of our submission. As we approach the discussion closure deadline, we remain available to address any aspects requiring further clarification. We look forward to engaging with any additional questions you may have.\"}", "{\"title\": \"Response to Reviewer 18vb Part 1\", \"comment\": \"We appreciate your detailed review and constructive feedback. We thank you for highlighting that we have an extensive set of experiments as well as our methodology. We hope the following response addresses your concerns.\\n\\n\\n> \\u201cidentifiers should be the privacy focus, and privacy leakage should refer specifically to identifier leakage.\\u201d\\n\\nWhile identifier leakage is a necessary component for measuring privacy leakage, measuring it alone is not sufficient to ensure privacy. Modern privacy threats increasingly leverage semantic patterns and quasi-identifiers (Ganta et al., 2008). Moreover, real-world privacy breaches like the Netflix Prize de-anonymization (Narayanan & Shmatikov, 2008) demonstrate how de-identified information, which has no identifier leakage, enables the breach of privacy. It is therefore important to go beyond identifier leakage for a proper measurement of privacy. \\n\\nFurthermore, we agree with the reviewer that the semantic information is heavily tied to the utility of the record; however, there is a long-standing tradeoff between\\nprivacy and utility, which is complicated by the fact that privacy is inherently context-dependent (Nissenbaum, 2004, Shao et al., 2024). Our work does not attempt to make normative judgments about what constitutes a privacy violation - rather, we provide a quantitative framework for measuring information persistence after sanitization. 
We aim to help disentangle the complex relationship between privacy and utility by providing a framework to measure and better understand these trade-offs. Our broader view of privacy is especially critical given the unprecedented scale and intimacy of user-LLM interactions.\\n\\n\\nReferences\\n\\nNissenbaum, H. (2004). Privacy as contextual integrity. Wash. L. Rev., 79, 119.\\n\\nShao, Y., Li, T., Shi, W., Liu, Y., & Yang, D. (2024) PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track.\\n\\nNarayanan, Arvind, and Vitaly Shmatikov. \\\"Robust de-anonymization of large sparse datasets.\\\" In 2008 IEEE Symposium on Security and Privacy (sp 2008), pp. 111-125. IEEE, 2008.\\n\\n\\n> \\u201cThe technical novelty is relatively limited \\u2026\\u201d\\n\\n> \\u201cThe findings are not particularly interesting\\u2026\\u201d\", \"our_work_addresses_a_critical_gap\": \"while privacy limitations are documented for structured data (Stadler et al., 2022), text data presents unique challenges that current methods fail to address. Major cloud providers and healthcare organizations continue to rely on simple PII detection and de-identification (Johnson et al., 2020), following outdated privacy models (Garcia et al., 2019). Our results quantify this failure - showing 94% information leakage with *state-of-the-art PII removal methods* - and provide compelling evidence that current approaches are fundamentally inadequate. We argue that our findings of textual sanitization methods falsely preserve privacy uncovers a fundamental issue beyond *merely* applying it to text data.\\n\\nThe urgency of this work is amplified by the scale of personal data sharing with LLMs (over 200M monthly ChatGPT users) and users' demonstrated tendency to share more intimate details with AI systems than human interlocutors (Zhang et al., 2024). This combination of increased disclosure and inadequate protection mechanisms creates significant privacy vulnerabilities that practitioners can no longer ignore.\\n\\nReferences\\n\\nJohnson, A. E., Bulgarelli, L., & Pollard, T. J. (2020, April). Deidentification of free-text medical records using pre-trained bidirectional transformers. In Proceedings of the ACM Conference on Health, Inference, and Learning (pp. 214-221).\\n\\nGarcia, D. (2019). Privacy beyond the individual. Nature human behaviour, 3(2), 112-113.\\n\\nZhang, Z., Jia, M., Lee, H. P., Yao, B., Das, S., Lerner, A., ... & Li, T. (2024, May). \\u201cIt's a Fair Game\\u201d, or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-26).\\n\\nStadler, Theresa, Bristena Oprisanu, and Carmela Troncoso. \\u201cSynthetic Data -- Anonymisation Groundhog Day.\\u201d arXiv, January 24, 2022. http://arxiv.org/abs/2011.07018.\"}", "{\"title\": \"Response to Reviewer Jmiv Part 4\", \"comment\": \"> Talking about metrics: In Equation 1 you define a \\\"privacy metric\\\", I guess that is what is reported under the (why differently named?) \\\"Semantic Similarity\\\" column in Table 1. It is based on the \\\"similarity metric\\\" from Section 2.4, which has values between 1 and 3 -- How does it end up with values between 0 and 1 in Table 1?? I couldn't see any discussion on some form of normalization of these scores. The expected value of scores >= 1 in Eq. 
1 would still result in a value >= 1, and not in [0,1). Please double-check how you actually compute the metrics. Try not to distribute information pertaining to one concept across the paper, but put it concisely into one place if possible. Also prefer consistent naming.\\n\\nThank you for identifying this inconsistency. The semantic similarity metric indeed originates from a 1-3 scale. We normalize these scores to the [0,1] range. We have consolidated all metric-related information, including this normalization step, in Section 2 to ensure clarity and completeness. Additionally, we have standardized the terminology throughout the paper to consistently refer to these metrics using the same names in both the methodology section and results discussion.\\n\\n> The effect of \\\\epsilon appears surprisingly small to me, with only minimal changes across all metrics even when comparing \\\\epsilon=3 and 1024. Can you explain this behavior?\\n\\nExplained above. \\n> It would be interesting to compare with a random baseline where the utility is determined from completely random texts -- to rule out that 0.4x task utility in the case of MedQA can already be achieved based on completely random input (say, if the dataset suffers from strong class imbalance and the classifier always just guesses the largest class, thus obtaining an overly optimistic accuracy).\\n\\n\\nThank you for this suggestion. We have a related baseline that measures the language model's inherent knowledge bias. Instead of using random text input, we evaluate the model's performance when given only the question without any context or private information, achieving 0.44 accuracy. \\n> L404: What exactly is the \\\"linkage rate\\\"? Please specify.\", \"there_are_two_stages_in_our_pipeline\": \"the linking stage using the linking method L, and the stage where we apply the similarity metric \\u03bc. The linkage rate measures the percentage of documents correctly matched with their corresponding auxiliary information using linking method L. For this metric, we only report sanitization methods that preserve a correspondence between original and sanitized documents.\\n\\n> L423: Contradicting statements: Here, you state the last three claims are used, previously in L296, you mentioned 3 randomly selected claims.\\n\\nWe use randomly selected claims in our experiments. We have fixed this inconsistency in the manuscript.\\n\\n> L443/Section 2.4: If you can switch the similarity metric \\u00b5 also to ROGUE-L, please already state this as possible option in Section 2.4 where you introduce \\u00b5. Currently, you only say there that \\u00b5 is determined by querying a language model.\\n\\nThank you. We have updated our paper to distinguish between the language model-based metric and the ROUGE-L based privacy metric.\\n\\n> Lastly, what are your thoughts on information that is both privacy-sensitive and utility-relevant, say, if one or more atomized claims are also strongly related to a downstream task? For instance, what if an atomized claim turns out to be \\\"John likes baseball\\\", and one of the WildChat categories is \\\"baseball\\\", too? Feel free to substitute \\\"baseball\\\" with something more delicate, such as \\\"drinking alcohol\\\". (Case A: If the baseball aspect is kept in the sanitized document, both \\u00b5 and the chi^2 distance should be small, indicating poor privacy but good utility. 
Case B: If the baseball aspect was redacted, both \\u00b5 and chi^2 should be larger, indicating better privacy but poorer utility.)\\n\\nAnswered above.\\n\\n> Additional considerations for related work\\n\\nThank you! We have added them in the paper.\"}", "{\"comment\": \"> Focus evaluation on a range of smaller epsilons\\n\\nThank you for the suggestions! We\\u2019ll look into adding more smaller epsilon comparisons. Here are our reasons for selecting the existing set of epsilon values:\\n1. In our experiments, we observe when \\\\epsilon is 3, the model output is private, but the utility is quite low. In particular, the text produced is incoherent (please refer to results in Table 2). We therefore opted not to try lower values of \\\\epsilon, which we would expect to increase privacy (which is already very high) but further decrease utility. Instead, we studied higher values of \\\\epsilon in an attempt to improve utility. We observe that even at these higher values, DP can still protect privacy; this is consistent with recent studies that have also shown that higher values of \\\\epsilon can still protect against membership inference attacks (Lowy et al., 2024; Ponomareva et al., 2022). \\n\\n2. Our minimum value of \\\\epsilon = 3 follows established practices in the literature, including Yu et al. (2021), Mehta et al. (2022) and Mattern et al. (2022). This value provides stronger privacy guarantees compared to the one evaluated in Yue et al. (2023), whose differential privacy sanitization method we adopted. This informed our decision to examine epsilon values above 3.\\n\\n\\nReferences \\n\\nYue, Xiang, Huseyin A. Inan, Xuechen Li, Girish Kumar, Julia McAnallen, Hoda Shajari, Huan Sun, David Levitan, and Robert Sim. \\\"Synthetic text generation with differential privacy: A simple and practical recipe.\\\" arXiv preprint arXiv:2210.14348 (2022).\\n\\n\\nMehta, Harsh, Abhradeep Thakurta, Alexey Kurakin, and Ashok Cutkosky. \\\"Large scale transfer learning for differentially private image classification.\\\" arXiv preprint arXiv:2205.02973 (2022).\\n\\nMattern et al., \\\"Differentially Private Language Models for Secure Data Sharing\\\", EMNLP 2022 \\n\\nYu, Da, Saurabh Naik, Arturs Backurs, Sivakanth Gopi, Huseyin A. Inan, Gautam Kamath, Janardhan Kulkarni et al. \\\"Differentially private fine-tuning of language models.\\\" arXiv preprint arXiv:2110.06500 (2021).\\n\\nLowy, Andrew, Zhuohang Li, Jing Liu, Toshiaki Koike-Akino, Kieran Parsons, and Ye Wang. \\\"Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?.\\\" arXiv preprint arXiv:2402.09540 (2024).\\n\\nPonomareva, Natalia, Jasmijn Bastings, and Sergei Vassilvitskii. \\\"Training text-to-text transformers with privacy guarantees.\\\" In Findings of the Association for Computational Linguistics: ACL 2022, pp. 2182-2193. 2022.\\n\\n\\n> where the drop in utility is more significant than the improvement in privacy\\u2026\\n\\nThank you for highlighting this issue. While DP methods achieve strong privacy protection on both MedQA and WildChat, the privacy gains differ specifically due to variations in the privacy protection of fine-tuning approaches. This difference stems from the threat models: in MedQA, we treat both questions and answers as public information, to evaluate the sanitization method's ability to generate context corresponding to correct choices. Conversely, for WildChat, we consider the entire conversation as private information. 
We hypothesize that this distinction in information availability directly affects the fine-tuning methods' ability to learn private information, explaining the observed differences in privacy gains across our experiments. We will update the manuscript to reflect the discussion, and improve the claim we are making. \\n\\n\\n> I assume that you will add the additional explanations to the paper where they are required/helpful for readers to better/quicker follow the paper.\\n\\nWe thank you again for your constructive feedback, especially regarding clarity improvements. All suggested clarifications have been incorporated into the manuscript, with modifications highlighted in yellow for reference.\"}", "{\"summary\": \"The manuscript seeks to highlight privacy concerns in text-sanitization techniques, by a) proposing a semantic-similarity based privacy metric for re-identification/matching attacks, and b) evaluating state-of-the-art defenses against such inference under the proposed metric.\\n\\nThe authors use a 2-step approach; in the first 'linking' step, sanitized documents are compared to externally known auxiliary information about a target individual using a TFIDF-based sparse retriever, and in the second 'semantic matching' step, a language model is used to assess similarity between atomic claims in the retrieved document and those in the original, unsanitized document.\\n\\nThe paper then evaluates several defense strategies to quantify information leakage under the above framework, and find that DP based methods may provide some of the strongest protections, albeit at the cost of data utility.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": [\"Extremely well-written paper, with clear motivation and research questions.\", \"The figures in the paper are very informative, and do an excellent job of conveying key information. The running examples were great for readability.\", \"Clear problem statement, with adequate background and justification of design choices. Creative use of LLMs as an evaluation for text-coherence.\", \"The results about access to different sets of auxiliary information were really interesting to me. The hypothesis about the non-uniformity of LLMs' instruction-following seems intuitive, but would be interesting to quantify this in its own right.\", \"The human subject experiments were a nice touch - useful to know the capabilities of the two models in this context.\"], \"weaknesses\": \"Can't think of any immediate flaws or weaknesses. Happy to discuss further once other reviews are in.\", \"questions\": [\"Did you find major differences between the two datasets in terms of atomizing claims in documents? It seems to me that this would be more structured in a medical Q&A dataset, as compared to LLM-user interactions.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for your response. This adequately answers my question, and these insights would be a nice addition to the paper.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"This paper investigates the current limitations of existing textual data sanitization methods. By considering re-identification attacks with known auxiliary information, the paper shows that a sparse retriever can link sanitized records with target individuals even though the PII patterns are anonymized. 
Instead, the paper proposes a new privacy evaluation framework for the release of sanitized textual datasets. The paper considers two datasets, including MedQA and WildChat, to show that seemingly innocuous auxiliary information can be used to deduce personal attributes like age or substance use history from the synthesized dataset. Experimental results also verify that current data sanitization methods create a false sense of privacy only on the surface level.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-written and easy to follow. All the included sanitization methods are up-to-date and well-explained.\\n\\n2. The proposed method is straightforward in decomposing the re-identification with linking and matching methods. \\n\\n3. Experimental results are comprehensive with sufficient ablation experiments. The included baselines are solid and up-to-date.\", \"weaknesses\": \"1. My major concern is that the auxiliary data is highly contrived. Based on my understanding, each auxiliary sample is the subset of exact atomics from the target record. For example, in Fig 1, Auxiliary Information contains two atoms of the Original Record. That is, if you only consider de-identified, sanitized records, it is very easy for your BM25 retriever to get the sanitized target. In real-world re-identification attacks, there is no such auxiliary information that has many exact overlapped n-grams as original records.\\n\\n2. For the claim that 'private information can persist in sanitized records at a semantic level, even in synthetic data,' if you consider DP generation, the privacy level is indicated by $(\\\\epsilon, \\\\delta)$. That is, your linked record may not be the original target sample. DP introduces random noise to offer plausible deniability and protect the original record's privacy.\\n\\n3. The implemented methods for the proposed privacy evaluation framework only integrate various existing components by using the contrived auxiliary data. It is not likely to scale this framework for a large number of overlapped atoms.\", \"questions\": \"Please refer to my weaknesses. Also, I have a few new questions.\\n\\n1) How can your method extend to other datasets? Is there any real auxiliary data that can be used instead of creating overlapped auxiliary data from the original records?\\n\\n2) Regarding the concept of privacy, is converting the age of 23 to early 20s a privacy breach? Such conversion is commonly adopted by K-anonymity.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Acknowledge\", \"comment\": \"Thanks for your rebuttal to clarify a few disagreements. Except I still think that the auxiliary data is highly contrived, I agree with the other comment. So, I will raise my review score.\"}", "{\"title\": \"Response to Reviewer Jmiv Part 2\", \"comment\": \"> The effect of \\\\epsilon appears surprisingly small to me, with only minimal changes across all metrics even when comparing \\\\epsilon=3 and 1024. Can you explain this behavior?\\n\\nThe fact that privacy preserving techniques with extremely large $\\\\epsilon$ have a significant effect, and at the same time the value of $\\\\epsilon$ has relatively small effect has been observed in other applications of differential privacy also. 
For example, it is widely known in the DP community that adding DP with very large $\\\\epsilon$ significantly mitigates Membership Inference Attacks (MIA) as measured by standard MIA metrics. We believe that this is partly due to the fact that DP-SGD methods use clipping (even for very large $\\\\epsilon$), which already significantly reduces the influence of any single sample for any $\\\\epsilon$ values.\\n\\nIn addition, there is a significant difference between our threat model and the strong adversarial model assumed in DP. While DP provides worst-case privacy guarantees against an adversary who knows all but one record in the dataset, our threat model considers a substantially weaker adversary who only has access to partial information from a single record. Additionally, the DP-SGD training framework we adopted in the paper composes privacy costs from each optimization step, assuming the adversary can observe gradients throughout training. In contrast, our threat model only allows the adversary to access the final sanitized dataset, not the resulting model and let alone the training process. This further reduces the effective strength of the attack. This finding aligns with recent findings that demonstrate DP's effectiveness against membership inference attacks even with larger epsilon values (Lowy et al., 2024)\\n\\n\\nReference\\n\\nLowy, A., Li, Z., Liu, J., Koike-Akino, T., Parsons, K., & Wang, Y. (2024). Why Does Differential Privacy with Large Epsilon Defend Against Practical Membership Inference Attacks?. arXiv preprint arXiv:2402.09540.\", \"below_are_our_responses_to_your_remaining_questions\": \"> Also, what would happen if you shifted the (apart from the ordering: somewhat arbitrarily) assigned scores for the \\\"similarity metric\\\" in Section 2.4 from {1,2,3} to {0,1,2} or to {1,10,100}?\\n\\nWe apologize for not explaining this clearly in the paper. The privacy score in our framework is normalized to a [0,1] range, where 1 represents complete privacy and 0 represents no privacy. Shifting the scoring scale from {1,2,3} to {0,1,2} would not affect our final results, as the normalization process preserves the relative distances between scores. However, using highly uneven spacing like {1,10,100} could affect the results by introducing non-linear weighting between different privacy levels. We maintained equal intervals in our scoring to ensure consistent sensitivity across all privacy levels.\\n\\n> L081, L099: Just to be sure: If I understood correctly, \\\"claims\\\" can refer to any sensitive or non-sensitive information in the original texts?\\n\\nYes, claims encompass any discrete piece of information from the original text, whether sensitive or non-sensitive. \\n\\n> L101: If (1) PII removal had 94% leakage (inferable claims), and (2) data synthesis (with or without DP?) has 9% lower leakage (i.e., 85% ?), why does (3) data synthesis without DP state 57% << 85% leakage?\\n\\nWe apologize for the confusion. The term \\\"identifier removal\\\" in line 101 refers to the broad category of all identifier removal methods, not just PII removal. Our results compare two main categories of data sanitization: identifier removal methods and data synthesis methods. The 9% improvement in privacy protection refers specifically to the difference between Dou et al.'s (2024) identifier removal method, which showed the best performance among removal techniques, and the data synthesis approach.\\n\\n> L146: Could you briefly motivate the use of the sparse BM25 retriever? 
Did you consider dense retrieval methods (say, using some form of text embeddings)? What are the benefits of BM25 vs. other sparse methods, or of sparse methods vs. dense methods, in particular for the linkage task at hand?\\n\\nWe initially implemented dense retrieval using state-of-the-art dense retriever models GritLM (Muennighoff et al., 2024), but we found that the BM25 sparse retriever performed on average 16% better than dense approach in the MedQA dataset, and they performed similarly in the WildChat dataset. \\n\\nReference\\n\\nMuennighoff, Niklas, Hongjin Su, Liang Wang, Nan Yang, Furu Wei, Tao Yu, Amanpreet Singh, and Douwe Kiela. \\u201cGenerative Representational Instruction Tuning.\\u201d arXiv, April 17, 2024.\"}", "{\"title\": \"Response to Reviewer oF1E Part 2\", \"comment\": \">\\u201dRegarding the concept of privacy\\u201d\\n\\nWhether converting \\\"23\\\" to \\\"early 20s\\\" constitutes a privacy breach depends on context and potential harm (Shao et al., 2024). Our framework doesn't make this normative judgment - instead, it measures information persistence after sanitization. We find concerning pattern preservation even with aggressive sanitization: medical records maintain linked combinations of symptoms, age ranges, and conditions (Zhang et al., 2024), while chat data preserves writing styles and topic preferences. These persistent patterns enable re-identification through modern machine learning techniques.\\n\\nOur results reveal fundamental limitations in current text privacy approaches, demonstrating the need for more sophisticated protection mechanisms that consider semantic-level information leakage. This is particularly crucial as organizations increasingly handle sensitive text data across healthcare, customer service, and other privacy-critical domains, and as private user data holds the key to unlocking new model capabilities.\\n\\n\\nReference\\n\\nZhang, Z., Jia, M., Lee, H. P., Yao, B., Das, S., Lerner, A., ... & Li, T. (2024, May). \\u201cIt's a Fair Game\\u201d, or Is It? Examining How Users Navigate Disclosure Risks and Benefits When Using LLM-Based Conversational Agents. In Proceedings of the CHI Conference on Human Factors in Computing Systems (pp. 1-26).\\n\\nShao, Y., Li, T., Shi, W., Liu, Y., & Yang, D. (2024) PrivacyLens: Evaluating Privacy Norm Awareness of Language Models in Action. In The Thirty-eight Conference on Neural Information Processing Systems Datasets and Benchmarks Track.\"}", "{\"title\": \"Response to Reviewer oF1E Part 1\", \"comment\": \"We thank the reviewer for the helpful feedback and highlighting our strengths, including the comprehensive benchmarking of sanitization methods, straightforward pipeline, comprehensive experiments and ablations, and easy to follow write-up. We hope the explanations below address the reviewer\\u2019s remaining concerns and questions.\\n\\n>\\u201dMy major concern is that the auxiliary data is highly contrived.\\u201d\\n\\nThank you for raising the concern about the lexical overlaps between the auxiliary data and original records. We agree that in realistic applications, auxiliary information can come in more nuanced or complex formats. However, in our experiments, we use the same auxiliary information to highlight differences between existing lexical-based and proposed semantic-based privacy metrics, showing that our semantic-based metric uncovers more leakage. However, if the auxiliary information is more nuanced\\u2014e.g. 
semantically similar to the original atoms\\u2014it becomes even harder for privacy metrics to detect, further demonstrating our point of these lexical methods providing a \\u201cfalse sense of privacy.\\u201d\\n\\nWhile our auxiliary setup may appear contrived, privacy guarantees must account for worst-case scenarios (Dwork & Roth, 2014). Real-world privacy breaches like the Netflix Prize de-anonymization (Narayanan & Shmatikov, 2008) demonstrate how seemingly innocuous auxiliary information enables re-identification. Our ablation studies validate framework robustness across varying information settings: MedQA shows linking rates of 58-78% for LLM-based sanitization and 81-94% for PII removal, while WildChat maintains consistent rates of 56-62% across methods. This variation in success rates indicates our framework captures meaningful privacy risks rather than artificially inflated matches.\\n\\nReference\\n\\nNarayanan, Arvind, and Vitaly Shmatikov. \\\"Robust de-anonymization of large sparse datasets.\\\" In 2008 IEEE Symposium on Security and Privacy (sp 2008), pp. 111-125. IEEE, 2008.\\n\\nDwork, Cynthia, and Aaron Roth. \\\"The algorithmic foundations of differential privacy.\\\" Foundations and Trends\\u00ae in Theoretical Computer Science 9, no. 3\\u20134 (2014): 211-407.\\n\\n\\n\\n>\\u201dFor the claim that 'private information can persist in sanitized records at a semantic level,\\u201d\\n\\nWhile differential privacy provides formal guarantees through $\\\\epsilon$, its practical implications for language models remain unclear (Habernal, 2022). Different $\\\\epsilon$ values have ambiguous meaning for text privacy - our work provides empirical quantification of these guarantees. With $\\\\epsilon$=1024, we observe improved privacy scores (0.92 from 0.43) but significant degradation in both task performance (0.62 to 0.40) and text coherence (3.44 to 2.25). This aligns with recent findings showing that DP's theoretical guarantees may not translate directly to meaningful privacy protection in high-dimensional text data (Brown et al. 2022).\\n\\nReference\\n\\nBrown, H., Lee, K., Mireshghallah, F., Shokri, R., & Tram\\u00e8r, F. (2022, June). What does it mean for a language model to preserve privacy?. In Proceedings of the 2022 ACM conference on fairness, accountability, and transparency (pp. 2280-2292).\\n\\nHabernal, Ivan. \\u201cWhen Differential Privacy Meets NLP: The Devil Is in the Detail.\\u201d In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 1522\\u201328. Online and Punta Cana, Dominican Republic: Association for Computational Linguistics, 2021. https://doi.org/10.18653/v1/2021.emnlp-main.114.\\n\\n\\n\\n\\n>\\u201dIt is not likely to scale this framework for a large number of overlapped atoms.\\u201d\\n\\nOur framework demonstrates practical effectiveness using just 3 claims for meaningful privacy evaluation, contradicting concerns about scalability. This efficiency stems from our novel semantic matching approach (detailed in Section 3.2) which captures information leakage without requiring exhaustive claim combinations. The framework adapts naturally across domains - medical records separate into symptoms, history, and demographics (average 15.6 claims/document), while conversational data follows dialogue structure and topic boundaries (Wang et al., 2023).\\n\\nReference\\n\\nWang, X., Wei, J., Schuurmans, D., Le, Q., Chi, E., Narang, S., ... & Zhou, D. (2022). 
Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171.\"}", "{\"title\": \"Response to Reviewer 18vb Part 2\", \"comment\": \"> \\u201cThe numerical results rely on LLM output \\u2026\\u201d\\n\\nWe conducted thorough human evaluation showing strong agreement (0.93 Spearman Correlation) between annotators and LLM judgments, validating our approach. LLM-based evaluation is increasingly accepted in the research community (Chiang & Lee, 2023; Zheng et al., 2023), with recent work using similar approaches for code similarity assessment (Chon et al., 2024), text generation evaluation (Wang et al., 2023), and information extraction validation (Hsu et al., 2024). Our choice to query LLaMA three times is supported empirically by a range of prior works on self-consistency of LLM prompting (Wang et al., 2022). There is significant instruction-following inconsistency with single queries (where the agreement drops to 0.84 Spearman Correlation). \\n\\nWe would be happy to clarify any of these points further or provide additional details about specific aspects of our methodology.\"}" ] }
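The scoring and agreement checks described in the author responses above (three repeated LLM similarity judgments per claim, an equally spaced {1,2,3} scale normalized to [0,1], and a 0.93 Spearman correlation against human annotators) can be made concrete with a short sketch. All helper names and the toy scores below are hypothetical; this is not the authors' evaluation code, only a minimal reconstruction of the procedure as stated.

```python
# Minimal sketch of the agreement check described above; names and data are illustrative only.
from collections import Counter
from scipy.stats import spearmanr

def majority_vote(votes):
    # Aggregate three repeated LLM judgments for one claim, e.g. [3, 3, 2] -> 3.
    return Counter(votes).most_common(1)[0][0]

def rescale(score, lo=1.0, hi=3.0):
    # Linear map from the equally spaced {1, 2, 3} scale onto [0, 1]; shifting the
    # scale to {0, 1, 2} leaves these normalized values unchanged, as noted above.
    return (score - lo) / (hi - lo)

# Hypothetical per-claim scores from human annotators and from three LLM queries each.
human_scores = [3, 2, 2, 1, 3, 1]
llm_votes = [[3, 3, 2], [2, 2, 2], [2, 3, 2], [1, 1, 1], [3, 3, 3], [1, 2, 1]]

llm_scores = [majority_vote(v) for v in llm_votes]
normalized = [rescale(s) for s in llm_scores]
rho, p_value = spearmanr(human_scores, llm_scores)
print(normalized, f"Spearman rho = {rho:.2f}")
```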
04TRw4pYSV
Dual-Modality Guided Prompt for Continual Learning of Large Multimodal Models
[ "Fanhu Zeng", "Fei Zhu", "Haiyang Guo", "Xu-Yao Zhang", "Cheng-Lin Liu" ]
Large Multimodal Models (LMMs) exhibit remarkable multi-tasking ability by learning mixed datasets jointly. However, novel tasks are encountered sequentially in a dynamic world, and continually fine-tuning LMMs often leads to performance degradation. To handle the challenges of catastrophic forgetting, existing methods leverage data replay or model expansion, both of which are not specially developed for LMMs and have their inherent limitations. In this paper, we propose a novel dual-modality guided prompt learning framework (ModalPrompt) tailored for multimodal continual learning to effectively learn new tasks while alleviating forgetting of previous knowledge. Concretely, we learn prototype prompts for each task and exploit efficient prompt selection for task identifiers and prompt fusion for knowledge transfer based on image-text supervision. Extensive experiments demonstrate the superiority of our approach, e.g., ModalPrompt achieves a +20% performance gain on LMM continual learning benchmarks with x1.42 inference speed while refraining from growing the training cost in proportion to the number of tasks. The code will be made publicly available.
[ "Continual learning", "Large multimodal models", "Efficient learning", "Prompt learning" ]
https://openreview.net/pdf?id=04TRw4pYSV
https://openreview.net/forum?id=04TRw4pYSV
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mcJu2ocADn", "S3UGrkl7Xl", "P9EzJe3j23", "8ThdxILtvX", "05aw6LuTRT" ], "note_type": [ "official_review", "official_review", "comment", "official_review", "official_review" ], "note_created": [ 1730571153912, 1730016439970, 1731652583062, 1730096419634, 1730698012588 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission5573/Reviewer_Nsh2" ], [ "ICLR.cc/2025/Conference/Submission5573/Reviewer_d96g" ], [ "ICLR.cc/2025/Conference/Submission5573/Authors" ], [ "ICLR.cc/2025/Conference/Submission5573/Reviewer_JJqr" ], [ "ICLR.cc/2025/Conference/Submission5573/Reviewer_roZe" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes a continual learning scheme for LMMs based on prompt selection and fusion. Experiments on eight datasets show the effectiveness of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"In large models like LLMs and LMMs, learned prompts serve as new \\\"viewpoints\\\" that enhance the performance of the underlying LMMs on specific tasks. I believe exploring prompt-based \\\"continued learning\\\" techniques can be practically beneficial, especially with the availability of powerful LMMs.\", \"weaknesses\": [\"The paper is difficult to read, as it presents simple ideas in an abstract and complex manner. It requires a substantial revision before one can properly evaluate its soundness and contribution.Thus, I do not believe it is ready for publication at ICLR. Here are some areas of confusion I encountered:\", \"In line 161, it states \\u201cThe characteristic of LMM continual learning includes: \\u2026\\u201d It is unclear whether the authors refer to a general consensus on LMM continual learning or their specific proposals.\", \"The summation in Eq.(3) lacks a dummy variable. Are you summing over individual prompts within a set for a specific task $t$?\", \"Consider using $\\\\bar{x}$ for the average of prompts, as bold symbols could be confusing since they typically represent vectors.\", \"In line 201, the projection should be defined as $\\\\text{Proj}_v(\\\\cdot):\\\\mathbb{R}^{d_v}\\\\rightarrow\\\\mathbb{R}^{d_t}$.\", \"In Eq.(7), What is $X_p$? Is it the collection of all prompts? It's unclear how prompts are selected in your process.\", \"One possible understanding: You have $N$ prompts for each of the $T$ tasks, so $T\\\\times N$ in total. The selection is performed over all the $T\\\\times N$ and produce $k$ most relevant ones.\", \"Line 242 states, \\u201cTo enhance knowledge transfer, the dual-modality features could serve as guiding cues for prompts to accurately get close to multimodal distributions of current task in feature space.\\u201d What are the dual-modality features? Are they the features of the current task? What do you mean by \\u201cmultimodal distributions\\u201d? I don't think those terminologies are self-explanatory and commonly used in the field. Why is the closeness to the distribution helpful in enhancing knowledge transfer?\", \"Eq.(9) abuses the symbol $\\\\mathbf{x}^t_p$ for prototype features, the same term is used for the \\u201cprompt features\\u201d in Eq.(3).\", \"In Eq.(10) what are the definitions of $\\\\alpha^{\\\\le t}$ and $\\\\beta^{\\\\le t}$? 
What is the shape of $\\\\tilde{X}^t_p$?\", \"In line 265, where do you define the parameters $\\\\theta_p^t$ of prototype prompts?\", \"In Table 1, what is the metric of the first two methods?\", \"In Table 2, what do $B_i$ and $M_i$ represent in the second row?\", \"Previous text implies that \\u201cnumber of selection prompts $k$\\u201d refers to selecting the top-k most similar prompts. However, by line 448-455, it seems $k$ refers to the number of sets of prototype prompts. Which is the correct understanding?\", \"Line 456 is confusing when it mentions \\u201cchoosing three sets of prototype prompts.\\u201d Based on subsection 3.2 (line 237, \\u201cwe term set of prompt for each task as prototype prompts\\u201d), shouldn\\u2019t the number of prototype prompt sets match the number of tasks, which is eight?\", \"In Fig.5, it is not clear what quantity is plotted. Is it the average similarity between the prototype features and task features across all in task samples and targeting prototypes?\", \"In addition, the visualization subsection at P.10 provides little information. Cherry-picking examples do not represent the overall behavior of your model. and I don't understand how these examples support the claim that your model retains previously learned knowledge.\"], \"questions\": [\"I will ask the authors to revise the entire paper to clarify their method and arguments.\", \"In the main text the authors repeatedly emphasize that their method is time-efficient in the sense that the time complexity of inference depends on the number of selected prompts rather than tasks. However, I find this unclear. First, during the inference for each task sample, one needs to compute the similarity with all the prompts, whose number equals to the number of tasks. If we disregard such selection computation, why should other methods exhibit an $O(N_{task})$ time complexity?\", \"To illustrate the importance of the dual-modality guidance, the authors compared the full results with those from using only image or text modalities. This comparison could be biased, as it relies solely on $\\\\alpha$ or $\\\\beta$ for prompts selection in the latter case. To ensure fairness, for example, one could use two different text encoders to obtain two estimates of text-based similarities $\\\\beta$ and $\\\\beta'$. This allows for a comparison of results using $\\\\alpha + \\\\beta$ with those using $\\\\beta + \\\\beta'$. Can you carry out this comparison and show the results?\", \"There seems to be a discrepancy between results in Fig.5 and Fig.6: GQA task features show their highest similarity with ImageNet prototype features (Fig. 5). yet the selected prototype prompts are primarily from the GQA task (Fig. 6).\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper explores continual learning in Large Multimodal Models, focusing on the challenge of enabling models to continuously learn across sequential tasks. The authors critically assess the limitations of existing approaches and propose a novel dual-modality guided prompt learning framework for multimodal continual learning. Extensive experiments show that the proposed method significantly enhances both performance and inference speed.\", \"soundness\": \"3\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"1. The question investigated in this paper is critical and significant in the current deep learning community.\\n2. 
The paper proposes a novel prompt learning framework for rehearsal-free continual learning of LMMs.\\n3. They conduct extensive experiments to demonstrate the effectiveness and inference speed of the proposed method.\", \"weaknesses\": \"1. Although the experiment improves the performance and inference speed, the proposed method involves modality-specific prompts for each task, which is too simple compared to existing work that devises advanced prompt strategies in visual scenarios. Simultaneously, they lack a comparison with the many existing prompt-based methods, such as DualPrompt [1], L2P [2], and CODA-Prompt [3].\\n2. There exist some typos in the paper:\\n 1. in line 100, `prpredominant'.\\n 2. in line 128, ... set `prompt of prompts' ... \\n3. The authors propose a setting that refrains from computation expansion in proportion to the number of tasks. Does this mean we can continuously learn from sequential data in one model while the performance continuously improves? In other words, how many tasks can the proposed method effectively handle within one model?\\n4. In the experiment, there is a lack of results that track a single task over the continual learning process, i.e., performance along the time axis, which directly reflects the transfer capability as more previous knowledge is learned.\\n5. With the add operation, there is no difference between the two terms in Equation 12.\\n6. How does the proposed method assess forgetting? Does it require saving a lightweight projection layer for each task, or should the projection layer from a previous task be re-tuned after learning a new one?\\n7. In Line 203, why do the visual encoder E_I and textual encoder E_T in CLIP realize the mappings E_I(\\u00b7) : R^{n_v \\u00d7d_v} \\u2192R^{d_v}, E_T(\\u00b7) : R^{n_t \\u00d7d_t} \\u2192R^{d^t}? This appears to be an error in the description.\\n\\n[1]. DualPrompt: Complementary Prompting for Rehearsal-free Continual Learning\\n\\n[2]. Learning to Prompt for Continual Learning\\n\\n[3]. CODA-Prompt: COntinual Decomposed Attention-based Prompting for Rehearsal-Free Continual Learning\", \"questions\": \"See weaknesses.\", \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents MODALPROMPT, a dual-modality guided prompt framework designed to address catastrophic forgetting in large multimodal models (LMMs) during continual learning. LMMs, which integrate visual and textual processing capabilities, encounter performance degradation when sequentially learning new tasks. To address this, MODALPROMPT leverages dual-modality (image and text) prompt learning to enable continual learning without task-specific expansion or data replay, which can be resource-intensive and raise privacy issues. 
By combining task-specific prototype prompts with a selection mechanism informed by image-text distributions, the model achieves improved task retention and transfer of knowledge across a variety of multimodal benchmarks.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"1.\\tIntroduces an innovative, data-efficient solution to catastrophic forgetting, critical for LMM applications in dynamic task environments.\\n2.\\tDemonstrates strong empirical performance with improvements across key continual learning metrics.\\n3.\\tEfficient design enables lower computational cost, making it scalable for broader application.\", \"weaknesses\": \"1.\\tThe baseline lacks a comparison with other prompt learning methods.\\n2.\\tComplexity in configuring prompt numbers and selection features may limit broader accessibility without further simplification or automation.\\n3.\\tModalPrompt needs to convincingly differentiate itself from prior work in prompt-based continual learning, likely through robust comparative experiments and ablations.\", \"questions\": \"1.\\tThere are numerous methods for multimodal prompt learning. Did the authors explore other approaches, and if so, how effective were they?\\n2.\\tAdditionally, why does the baseline comparison only include the LoRA method? Are there other fine-tuning methods considered? Could a direct comparison between LoRA and prompt learning be potentially unfair? \\n3.\\tIs there any comparison of FPS, storage, and speed?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper proposes a dual-modality guided prompt learning framework (ModalPrompt) tailored for multimodal continual learning to effectively leran new tasks while alleviating forgetting of previous knowledge. Extensive experiments demonstrate the superiority of the proposed method.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper is well written and is the first prompt learning framework for rehearsal-free continual learning of LMMs. The experimental results show a significant improvement, with comparisons conducted across various tasks and datasets.\", \"weaknesses\": \"1. The proposed method lacks substantial novelty, as prompt learning has already been widely used in fine-tuning pre-trained vision-language models in the continual learning setting.\\n2. The baseline is too weak, thus the effectiveness of the method is not very convincing. For example, the baseline accuracy of zero-shot on the REC task is 0.00.\", \"questions\": \"1. Prompt-based continual learning methods like L2P[1], DualPrompt[2], S-Prompts[3] and HiDe-Prompt[4] employ various prompt design and selection strategies. As for the prompt design, how does this paper demonstrate the superiority of the proposed method?\\n2. Is there a writing error in Equation 12? This loss aims to increase the similarity between $x^t_P$ and $x_{instruct}$; however, as $x^t_P$ and $x_{instruct}$ become more similar, it means the prompt cannot provide additional information, which would be detrimental to prompt learning.\\n\\n[1] Wang Z, Zhang Z, Lee C Y, et al. Learning to prompt for continual learning[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 139-149.\\n\\n[2] Wang Z, Zhang Z, Ebrahimi S, et al. 
Dualprompt: Complementary prompting for rehearsal-free continual learning[C]//European Conference on Computer Vision. Cham: Springer Nature Switzerland, 2022: 631-648.\\n\\n[3] Wang Y, Huang Z, Hong X. S-prompts learning with pre-trained transformers: An occam\\u2019s razor for domain incremental learning[J]. Advances in Neural Information Processing Systems, 2022, 35: 5682-5695.\\n\\n[4] Wang L, Xie J, Zhang X, et al. Hierarchical decomposition of prompt-based continual learning: Rethinking obscured sub-optimality[J]. Advances in Neural Information Processing Systems, 2024, 36.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
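To make the selection step debated in the reviews above concrete, here is a minimal sketch of one of the two readings the reviewers describe: k denotes the number of selected prompt sets, scored by the sum of image-side and text-side similarities (the alpha + beta selection). Every tensor name, shape, and the cosine-similarity choice is an assumption made for illustration; this is not the paper's implementation.

```python
# Illustrative sketch only: top-k prompt-set selection by summed dual-modality similarity.
import torch
import torch.nn.functional as F

def select_prompt_sets(img_feat, txt_feat, proto_img, proto_txt, prompts, k=3):
    # img_feat, txt_feat: (d,) features of the current image and instruction
    # proto_img, proto_txt: (T, d) per-task prototype features for T tasks
    # prompts: (T, N, d_p) N learned prompt embeddings per task
    alpha = F.cosine_similarity(proto_img, img_feat.unsqueeze(0), dim=-1)  # (T,) image-side
    beta = F.cosine_similarity(proto_txt, txt_feat.unsqueeze(0), dim=-1)   # (T,) text-side
    topk = torch.topk(alpha + beta, k=min(k, prompts.shape[0])).indices    # k task indices
    return prompts[topk].flatten(0, 1)  # concatenate the k selected prompt sets

# Hypothetical usage with random tensors: 8 tasks, 6 prompts per task.
T, N, d, d_p = 8, 6, 512, 4096
selected = select_prompt_sets(torch.randn(d), torch.randn(d),
                              torch.randn(T, d), torch.randn(T, d),
                              torch.randn(T, N, d_p), k=3)
print(selected.shape)  # torch.Size([18, 4096])
```

Under the other reading (selection over all T x N individual prompts), the top-k would instead be taken over a flattened (T*N,) score vector, which is exactly the ambiguity the first review asks the authors to resolve.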
04RLVxDvig
NanoMoE: Scaling Mixture of Experts to Individual Layers for Parameter-Efficient Deep Learning
[ "Lin Chen", "Kaiyuan Wang", "Gang Fu", "Mohammadhossein Bateni", "Vahab Mirrokni" ]
Large language models (LLMs) have achieved remarkable success, but their growing size leads to significant challenges in efficiency and cost. This work explores parameter-efficient deep learning, aiming to achieve comparable performance with fewer parameters and floating-point operations (FLOPs). We introduce NanoMoE, a novel family of parameter-efficient building blocks inspired by the Mixture of Experts (MoE) framework. NanoMoE offers a modular and efficient replacement for fully connected layers within traditional neural networks. We instantiate NanoMoE with three variants of increasing complexity and theoretically demonstrate its superior expressivity compared to low-rank factorization with minimal parameter increase. Empirical results validate that NanoMoE achieves superior model quality compared to low-rank factorization under the same parameter or FLOP budget, confirming its enhanced efficiency.
[ "Mixture of Experts", "Parameter Efficiency", "Expressivity", "Low-Rank Factorization" ]
Reject
https://openreview.net/pdf?id=04RLVxDvig
https://openreview.net/forum?id=04RLVxDvig
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yVokQ8YrVV", "txvp8fjE72", "sWMV7xoiVr", "e0uRBLmB3I", "XY5fBijfiM", "0XW1B5Dqlu" ], "note_type": [ "meta_review", "official_review", "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1734716192010, 1730044512970, 1737524090926, 1730161371465, 1730046374732, 1730696784966 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10906/Area_Chair_3Jcn" ], [ "ICLR.cc/2025/Conference/Submission10906/Reviewer_cSpB" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10906/Reviewer_53vK" ], [ "ICLR.cc/2025/Conference/Submission10906/Reviewer_tHzT" ], [ "ICLR.cc/2025/Conference/Submission10906/Reviewer_P46a" ] ], "structured_content_str": [ "{\"metareview\": \"**Summary**\\n\\nThis paper presents NanoMoE, a new class of structured matrices that offers enhanced flexibility over low-rank matrices while minimally increasing the number of parameters or FLOPs. Theoretical evidence demonstrates that NanoMoE can achieve a significantly higher rank and provides greater flexibility than low-rank matrices for comparable parameter counts. Experimental results on small scale problems further confirm that NanoMoE layers outperform low-rank layers in terms of performance.\\n\\n**Strengths**\", \"the_reviewers_unanimously_highlighted_several_strengths_of_the_proposed_framework\": [\"NanoMoE is a novel family of structured matrices with clear theoretical advantages over low-rank matrices in terms of expressiveness, especially in achieving higher ranks for the same parameter count. This factorization and aggregation technique offers a fresh perspective on optimizing neural network architectures.\", \"The paper rigorously proves the said advantages of NanoMoE\", \"Experiments support the theory and show performance improves as a function of FLOPs, which is important to develop scalable methods, especially for pre-training.\", \"**Weaknesses**\", \"Several core weaknesses was brought up by the reviewers. These include:\", \"Although the primary goal is to reduce computational cost and memory usage, the experiments focus mainly on parameter reduction. The lack of discussion on whether the method improves inference speed limits the assessment of its practical efficiency gains.\", \"The authors conduct only two experiments on a single dense layer or a simple model, which makes it challenging to evaluate the method\\u2019s applicability to real-world deployment and its performance in more complex scenarios.\", \"**Conclusion**\", \"The majority of reviewers acknowledge the merits of the paper but criticize the experimental setup as rudimentary and inconclusive. Unfortunately, the authors did not submit a rebuttal. Given the unanimous consensus among the reviewers favoring rejection, I also vote to reject this paper.\"], \"additional_comments_on_reviewer_discussion\": \"Given the unanimous vote of the reviewers and the lack of responses from the authors, we did not find it necessary to further discuss this paper.\"}", "{\"summary\": \"The authors propose to extend low-rank approximation of standard neural network weight matrices of the form W=UV into W = blockdiag(U) M blockdiag(V) where blockdiag(U) is a block diagonal reshaping of the original matrix U. M is a block matrix interpreted as expert weights for each possible combinations of subblocks from blockdiag(U) and blockdiag(V). 
The M matrix is parametrized in three different ways (scalar times identity, diagonal, diagonal plus outer product) with increasing expressivity proved theoretically but also increasing computational cost. The authors empirically validate that the proposed approach is better than low-rank in terms of train/test loss for 1) a synthetic data setting when controlling parameters and FLOPs, 2) AG news classification when controlling for parameters and FLOPs.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea is quite relevant to lots of on-going works that replace dense matrices with different structure matrices for improved performance. I think it\\u2019s a nice addition to the community.\", \"The paper is well-written and easy to follow\"], \"weaknesses\": \"The primary problem with all of the empirical evaluations in this paper is that they are not informative about whether the proposed approach is actually a good replacement for standard MoE layers or not. The baseline is just low-rank, which is shown to be less expressive already compared to the proposed nanoMoE. It\\u2019s essential to compare to a standard MoE where you\\u2019re not using any low-rank structure but with just dense matrices.\\n\\nIt\\u2019s also not surprising that when controlling for parameters, nanoMoE is performing better than low-rank since the parameter overhead introduced by K and r are relatively small (the authors sweeped over small values of K). \\n\\nI\\u2019m willing to change my scores if the authors add the dense matrix $W\\\\in\\\\mathbb{R}^{d_2\\\\times d_1}$ baseline and the standard MoE with dense matrices baseline, at least in a limited setting if the compute budget is a problem during rebuttal.\", \"questions\": \"What\\u2019s the loss function in the synthetic dataset experiments where you are sampling i.i.d gaussian random vectors of dimension 20480? The text mentions that it\\u2019s testing the FC layer from OPT-13B, which is a language model. It\\u2019s not clear to me what\\u2019s the training objective function here.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The paper introduces NanoMoE, a building block which adds an additional mixing layer to low-rank factorization layers for linear projections. The paper draws connections to the mixing matrix from the mixture of expert literature, and characterises the space of matrices it can represent. Finally the authors test the proposed method on a synthetic task on various FLOPs budgets and on the AG news classification task.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"The method to mix low-rank factorization intermediate outputs is interesting.\", \"The paper emphasizes how performance improves as a function of FLOPs, which is important to develop scalable methods, especially for pre-training.\"], \"weaknesses\": [\"**Conceptual Framing**\", \"The connection to the sparsely-gated mixture of experts literature is very weak. As the method is described in section 3, the matrix M performs mixing over the partitioned input x_in, this is very different from [1] which is referenced in section 1, where specific components of the network learn to route inputs to \\u201cexperts\\u201d. 
Nanomoe rather does a sort of sparse mixing over the embedding dimension, where no sparse routing or \\u201cexpertise\\u201d is learned.\", \"The claims in the paper are too overreaching.\", \"Mixture of expert layers and sparse layers have already been applied to individual layers in prior work, see [1] and [2] [3] for preliminary references, a more thorough literature review should be in the paper. This work does not scale more than previous work in terms of applying them to whole components of the network, or in the experimental setting size (which is very modest in NanoMoE).\", \"Section 1 says \\u201cWe formally define this problem as parameter-efficient deep learning\\u201d. There is not a formal definition of this in the paper, just a formal definition of the proposed method. Also, more related to the proposed approach, is sparse deep learning, which has a rich body of literature. The paper hardly proposes a new problem that is not already known or tackled in the deep learning literature.\", \"There is no discussion about hardware efficiency other than a very loose definition of FLOPs in numpy. Real-world hardware efficiency is necessary to scale up methods as implied in the title.\", \"The Monarch matrices line of work [5] [6] [7] seems very relevant to this work (it is not cited), as it deals with a more efficient building blocks with block-diagonal matrices, with detailed discussion on expressiveness, efficiency, experimental work and covering a super-set of the scope of this paper (both pre training with mixed sparse+dense, and finetuning of dense models). I highly recommend the authors to review [5] as a blueprint for this work. It\\u2019s worth a discussion of the differences between both methods, both in terms of modelling and hardware efficiency for pretraining; at the very least, this seems like an important baseline to have in the experimental section.\", \"**Experiments**\", \"I found the first experiment from the OPT-13b layer very confusing. There is no description about what loss is being optimised, which makes it very difficult to interpret the results \\u2014 the losses in Figure 2) and b) seem high but without any description it is not possible to know if any of the models is learning anything useful at all. Moreover, the input is random gaussians with a rather high standard deviation, again without any explanation, this task does not seem to be representative of a real training task at all.\", \"The experiments compare to Low-Rank training as a baseline. However, a more important comparison to do is with a fully dense layer, which is what actually is commonly used in pre-training (which the paper advocates for in Section 1). Also, the related work section describes a number of models that would be important baselines to compare to, low-rank is arguably a simple baseline and not SOTA.\", \"For the AG News classification dataset, there\\u2019s several important experimental details missing, which are crucial to understand the empirical merit of NanoMoE:\", \"What is the loss being optimised?\", \"What are the details of the vectorization layer? What\\u2019s the vocabulary size? How are words out of vocabulary handled?\", \"How many epochs/steps occur during training?\", \"What is the optimizer and what hyper-parameters are used? (batch size, learning rate, regularisation, etc)\", \"How are the weights initialised in the NanoMoE layers? 
More generally speaking, which hyper-parameters are different in NanoMoE vs the low-rank baseline?\", \"What is the granularity of the K and r ranges?\", \"What is the activation function used in the experiments?\", \"[7] shows that a careful parametrization is needed for structured matrices. This is a missing detail on the hyper-parameters, but also a missing discussion for NanoMoE too.\", \"Figures 4) and 5) are hard to visualise with all the data points being very transparent. There is a lot of variance per Flop Budget, which probably is due to interactions of K and r. It is important to disentangle these effects as well.\", \"Plotting the low envelope seems to ignore the fact that NanoMoE is overfitting at higher FLOP counts on figure 5b (if that\\u2019s not the case, the colours are making this difficult to interpret). Is NanoMoE more prone to overfitting at higher FLOP budgets? If it is, then the method is not very promising, it could also be a lack of proper regularisation, but this is not clear given the lack of experimental details.\", \"Modern NLP solves classification problems such as AG News with unsupervised pre-training + transfer-learning (BERT-style models) or few-shot learning (GPT-style models). While large-scale pre-training is very expensive, there is work to pretrain BERT-style models in as little as 1 GPU day [4] which is more suitable to academic budgets. A *single* and small-scale experiment on this unsupervised learning setup, would be more apt to compare to modern methods in NLP (this can very well be the single best combination from the AG news experiment).\", \"The definition of FLOPs seem to focus on inference considerations, as I think it computes the output of numpy.einsum_path over a single einsum operation (is not clear what operations are included in the call to enisum_path, a spelled out code snippet would be useful). However, this paper focuses as per section 1 on efficient pre-training. This calls for a definition of FLOPs per training step, which includes: forward and backward FLOPs, runtime bounds such as given in [5], and practical step time on modern accelerators. A number of these can be future work, but it needs to be disclosed explicitly in order to consider the merits of the paper.\", \"All in all, I consider the experimental section to be too weak to claim this in the conclusion: \\u201cour empirical results consistently validate that NanoMoE achieves superior performance\\u201d. More thorough experiments need to be done before claiming this.\", \"[1] https://openreview.net/forum?id=B1ckMDqlg\", \"[2] https://proceedings.mlr.press/v162/dao22a/dao22a.pdf\", \"[3] https://openreview.net/forum?id=-b5OSCydOMe\", \"[4] https://proceedings.mlr.press/v202/geiping23a/geiping23a.pdf\", \"[5] https://proceedings.mlr.press/v162/dao22a/dao22a.pdf\", \"[6] https://openreview.net/forum?id=cB0BImqSS9&noteId=98BZANkxc8\", \"[7] https://proceedings.mlr.press/v235/qiu24f.html\"], \"questions\": [\"What are the hyper-parameters used to conduct the experiments? See weaknesses section for what it\\u2019s relevant to discuss. What\\u2019s the sensitivity of NanoMoE to hyper-parameters?\", \"What is the loss function used to optimise the first experiment of section 4? What is this experiment trying to show (irrespective of matching conclusions with the AG news experiment)?\", \"What is the runtime of NanoMoE compared to dense matmuls either with low-rank or not? How complicated is it to run this efficiently in modern accelerators? 
Is this future work?\", \"Is NanoMoE more prone to overfitting in the experiments?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper introduces a novel variant of the mixture of experts model aimed at reducing the number of parameters and floating-point operations (FLOPs) in neural networks. This is achieved by factorizing individual feedforward layers and their corresponding input feature maps into smaller chunks, and then aggregating their outputs. By applying this operation to dense layers, the method significantly reduces parameter count and memory requirements while maintaining competitive performance levels\\n\\nThe paper\\u2019s main contributions include: introducing NanoMoE, a parameter-efficient block family with three complexity levels (NanoMoE-I, II, and III); proving NanoMoE\\u2019s higher expressivity over low-rank factorization with minimal parameter increase; and validating through experiments that NanoMoE achieves better model quality than low-rank factorization at similar resource budgets.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The strengths of this paper can be summarized as follows:\", \"Efficiency in Resource Usage: The proposed method effectively reduces the number of parameters and computational demands, making it suitable for deployment in resource-constrained environments.\", \"Maintained Performance: Despite the reduction in computational resources, the model achieves results that are competitive with more resource-intensive approaches.\", \"Innovative Approach: The factorization and aggregation technique offers a fresh perspective on optimizing neural network architectures.\"], \"weaknesses\": [\"Although the idea is interesting, the proposed method has several major weaknesses:\", \"**Lack of Inference Speed Evaluation**: While the main objective is to reduce computational cost and memory footprint, the experiments focus primarily on parameter reduction. There is no discussion of whether the method improves inference speed, which is critical for assessing practical efficiency gains.\", \"**Limited Experimental Scope**: The authors conduct only two experiments on a single dense layer or a simple model, making it difficult to assess the method\\u2019s feasibility for real-world deployment and its performance in more complex scenarios.\", \"**Narrow Evaluation Metrics**: The evaluation is limited to loss reduction without considering classification accuracy, which would be valuable for classification tasks. Including transfer learning experiments would further help to gauge the method\\u2019s effectiveness across tasks.\", \"**Absence of Baseline Comparison**: The approach of weight partitioning and non-gated mixtures of experts is not new[1]. Comparisons with existing methods that use similar techniques, such as [1] focusing on parameter reduction, would provide clearer insights into the proposed method\\u2019s relative performance and innovation.\", \"[1] Scaling Laws for Fine-Grained Mixture of Experts\"], \"questions\": [\"**Questions:**\", \"How were the hyperparameters chosen? Was any analysis conducted to determine optimal values, especially for selecting the dense layer in synthetic data experiments?\", \"What prevented the focus from extending to multiple FFN layers? 
Was this due to increased complexity, as each dense layer would require a similar setup?\", \"Why has NanoMoE not been tested on more complex architectures beyond single dense layers? How do the authors envision scaling it for larger models?\", \"Is there a reason NanoMoE did not incorporate sparse gating mechanisms, as seen in other MoE frameworks?\", \"How does NanoMoE compare with other parameter-efficient MoE-based or low-rank models in terms of accuracy and parameter reduction? Were any qualitative comparisons made?\", \"Has NanoMoE been tested in transfer learning contexts? Would it retain its efficiency and performance when adapted to new tasks?\", \"**Suggestions:**\", \"The theoretical foundation is strong, but more experiments are needed to assess NanoMoE's performance and complexity compared to other MoE and existing approaches. I suggest:\", \"Adding performance comparisons with some existing baselines.\", \"Extending the experiments to more layers beyond the embedding layer, ideally including FFN and attention layers for a thorough evaluation.\", \"Compared with other MoE frameworks, NanoMoE\\u2019s structure is similar and would benefit from these benchmarks.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces NanoMoE, a novel family of structured matrices that achieves superior flexibility compared to low-rank matrices with minimal increase in parameters or FLOPs. The paper theoretically proves that NanoMoE can have a significantly higher rank and is strictly more flexible than low-rank matrices for similar parameter counts. Some experiments confirm the improved performance of NanoMoE layers relative to low-rank layers.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"NanoMoE is a novel family of structured matrices with clear theoretical advantages over low-rank matrices in terms of expressiveness, especially in achieving higher ranks for the same parameter count.\", \"The paper rigorously proves the said advantages of NanoMoE\", \"Experiments support the theory.\"], \"weaknesses\": [\"Experiments are only done on toy problems such as dense matrix approximation and a small text classification dataset. Can the authors present experiments on tasks such as image classification on CIFAR-10 / ImageNet and language modeling (e.g., using the nanoGPT codebase)? Results on these benchmarks have been used in evaluating new structured matrices in recent works [1, 2].\", \"There are already many equally parameter-efficient structured matrices that have the advantage of being full-rank, such as the Kronecker product, Tensor-Train decomposition, and Monarch matrices [1]. There is no comparison with these alternatives.\", \"While more expressive than the usual low-rank matrix, I believe NanoMoE will require more memory to store the activations (intermediate tensors have size $K r$ rather than just $r$). Moreover, I suspect the tensor core utilization will be lower because the block diagonal matrices involve contraction with smaller ranges, resulting in worse wall clock times despite having a minimal increase in FLOPs. The authors did not discuss these potential limitations.\", \"The experiment section does not provide details about how the models were trained. For example, are the learning rates well-tuned? 
Prior work [2, 3] has shown that structured matrices require very different learning rates than those commonly used for dense layers, making a well-tuned learning rate important for a fair comparison.\", \"The paper presents the connection to MoE as a strength since it has been shown to be more compute-efficient for pre-training LLMs. But only sparse MoE models have demonstrated improved training efficiency, and that is what was used in the referenced models such as Mixtral and Switch Transformer. The proposed NanoMoE, however, is not a sparse MoE model and is, therefore, unlikely to lead to similar benefits. The authors should carefully discuss this distinction.\", \"Recent works have used structured matrices to build MoE in each linear layer, similar to what is proposed in this work. I suggest the authors discuss these highly related works. [3, 4]\", \"[1] Dao et al. 2022. Monarch: Expressive Structured Matrices for Efficient and Accurate Training\", \"[2] Qiu et al. 2024. Compute Better Spent: Replacing Dense Layers with Structured Matrices\", \"[3] Potapczynski et al. 2024. Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices\", \"[4] Oldfield et al. 2024. Multilinear Mixture of Experts: Scalable Expert Specialization through Factorization\"], \"questions\": [\"Could you elaborate on the training details, such as the learning rate and optimizer? Did you properly tune them and were the results sensitive to these choices?\", \"Does NanoMoE lead to higher activation memory due to larger intermediate tensors?\", \"How does NanoMoE compare with low-rank in terms of performance vs wall clock time (rather than parameter count or FLOPs)?\"], \"flag_for_ethics_review\": [\"No ethics review needed.\"], \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
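As a concrete reading of the factorization the reviews above describe, W = blockdiag(U) M blockdiag(V) with M acting as a K x K grid of mixing blocks over an intermediate of size K*r, the following NumPy sketch implements the simplest "scalar times identity" mixing variant. All shapes, names, and the block layout are inferred from the reviewers' summaries rather than taken from the paper, so treat this as an illustration of the structure, not the actual NanoMoE layer.

```python
# Rough sketch pieced together from the reviews; NOT the paper's code.
import numpy as np

def nanomoe_forward(x, U_blocks, V_blocks, mix_scalars):
    # x:           (d1,)            input, split into K equal chunks
    # V_blocks:    K arrays, each (r, d1 // K)   "down" projections per chunk
    # U_blocks:    K arrays, each (d2 // K, r)   "up" projections per chunk
    # mix_scalars: (K, K)           scalar-times-identity mixing between chunk pairs
    K = len(V_blocks)
    x_chunks = np.split(x, K)
    h = [V_blocks[j] @ x_chunks[j] for j in range(K)]            # K intermediates of size r
    mixed = [sum(mix_scalars[i, j] * h[j] for j in range(K))     # mix information across chunks
             for i in range(K)]
    return np.concatenate([U_blocks[i] @ mixed[i] for i in range(K)])

# Hypothetical sizes; the block factors use r*(d1 + d2) parameters as in plain low-rank,
# and this variant adds only K*K mixing scalars on top.
d1, d2, r, K = 16, 16, 4, 2
rng = np.random.default_rng(0)
V_blocks = [rng.standard_normal((r, d1 // K)) for _ in range(K)]
U_blocks = [rng.standard_normal((d2 // K, r)) for _ in range(K)]
y = nanomoe_forward(rng.standard_normal(d1), U_blocks, V_blocks, rng.standard_normal((K, K)))
print(y.shape)  # (16,)
```

The other two variants described in the reviews would replace each scalar s_ij with a diagonal or diagonal-plus-rank-one r x r block.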
04RGjODVj3
From Rest to Action: Adaptive Weight Generation for Motor Imagery Classification from Resting-State EEG Using Hypernetworks
[ "Param Rajpura", "Yogesh Kumar Meena" ]
Existing EEG-based brain-computer interface (BCI) systems require long calibration sessions from the intended users to train the models, limiting their use in real-world applications. Additionally, despite containing user-specific information and features correlating with BCI performance of a user, resting-state EEG data is underutilized, especially in motor imagery decoding tasks. To address the challenge of within and across-user generalisation, we propose a novel architecture, HyperEEGNet, which integrates HyperNetworks (HNs) with the EEGNet architecture to adaptively generate weights for motor imagery classification based on resting-state data. Our approach performs similarly in a Leave-Subject-Out scenario using a dataset with 9 participants, compared to the baseline EEGNet. When the dataset size is scaled, with 33 participants' datasets, the model demonstrates its generalisation capabilities using the information from resting state EEG data, particularly when faced with unseen subjects. Our model can learn robust representations in both cross-session and cross-user scenarios, opening a novel premise to leverage the resting state data for downstream tasks like motor imagery classification. The findings also demonstrate that such models with smaller footprints reduce memory and storage requirements for edge computing. The approach opens up avenues for faster user calibration and better feasibility of edge computing, a favourable combination to push forward the efforts to bring BCIs to real-world applications.
[ "Brain-Computer Interfaces (BCIs)", "Motor Imagery", "HyperNetworks", "Data driven learning", "Adaptive weights" ]
Reject
https://openreview.net/pdf?id=04RGjODVj3
https://openreview.net/forum?id=04RGjODVj3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zIyEN0RNT6", "xj1DB3hEzo", "xFdrXXC96H", "vuP8XuNuMi", "uDliX7o8EQ", "knJiHOEz4m", "fjsC26xamv", "dcjL2PncDd", "aIeW8DIAQ5", "aFaYmIlILc", "Y9t1yNMbIS", "XxMk15wqut", "Wp71LhxiSp", "O9AsFymRaQ", "Nwz8jqfSht", "NMMJ6tJvnt", "ETJ8dOgch4", "7lES7mq0zq", "6n2n1UW9c6" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_review", "meta_review", "decision", "official_comment", "official_review" ], "note_created": [ 1732794790766, 1733158035973, 1732795016343, 1732795003766, 1730396501577, 1733128554431, 1732795260242, 1733128577188, 1732794804303, 1733128455338, 1730690133182, 1732795270118, 1732795217505, 1733128506756, 1730700236901, 1734399665471, 1737524298648, 1732794183874, 1730243876620 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Reviewer_CxNP" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Reviewer_Fidh" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Reviewer_CxNP" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Reviewer_gs65" ], [ "ICLR.cc/2025/Conference/Submission14090/Area_Chair_uBa4" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission14090/Authors" ], [ "ICLR.cc/2025/Conference/Submission14090/Reviewer_qyVC" ] ], "structured_content_str": [ "{\"comment\": \"We thank the reviewer for their insightful comments and suggestions. We appreciate their efforts in making this work more meaningful and robust. Based on the reviews, we have updated the Appendix Section in the original submission and the updated document can be viewed in the submission.\\n\\nWe thank the reviewer for acknowledging the originality and novelty in using resting state data to test the generalisation capabilities of the hypernetwork architecture.\\n\\nFollowing are the responses to the reviewer\\u2019s comments:\\n\\n_Claims on the strength of HyperNet + EEGNet would be improved through using a more comprehensive evaluation on the Dreyer et al. dataset. Leave-N-subjects-out train-test split should be done where around a quarter of the subjects are used as test subjects each time._\\n\\nWe perform the Leave-N-subject tests on Dreyer et al. dataset with N = 8,16,32. The results and the method are described in the Appendix section of the revised submission.\\n\\n_The HyperNet + EEGNet approach does not seem to work for the BCI IV IIa dataset (one of the two datasets tested). The authors mention that it is evidence that the method does not seem to work unless with a larger dataset. 
It would be better if there were another dataset that can be tested to show that the HyperNet + EEGNet approach does indeed improve classification given more than 9 subjects._\\n\\nWe agree with the reviewers suggestion and evaluate the approach on BCI IV IIb dataset with 9 participants. The results are detailed in the Appendix section. EEGNet outperforms the HyperEEGNet significantly. We also consider that since this dataset consists of just 3 channels, resting state data might not reflect the whole brain connectivity of an individual. However, we leave the interpretation for future work, including the neurophysiological basis of model interpretability.\\n\\n_Alternatively, the Dreyer et al. dataset could be evaluated while varying the number of subjects for training, e.g. 8, 16, 24, 32, etc to see if the trend of improving performance given more subjects occurs._\\n\\nAgreeing with the reviewer's suggestion, we perform the leave-N-subject out where the number of training subjects changes as we leave more subjects out for the test set. The results and methodology are described in the Appendix section. \\n\\n_It would be better if the term \\\"epoch\\\" is better clarified when used._\\n\\nWe thank the reviewer for pointing out the confusing terminology. We rewrite the statement as follows and also modify other instances across the document to avoid the confusion.\\n\\nMotor imagery activity data is extracted from a predefined time window (based on the experimental paradigm) in the raw data to perform the binary class classification with a forward pass on EEGNet with the generated weights from HyperNet. \\u2022 Cross entropy loss is accumulated for a batch of 50 epochs, and backpropagation is performed only on HyperNet parameters. Adam optimiser with learning rate 1e-4 is used\\n\\n_\\\"For the dataset from Dreyer et al. (2023), the \\u201dacquisition runs\\u201d from 33 participants are used for training and stratified 5-fold cross-validation is used to select the best model.\\\"\\nWhat variables are changed to select the best model? Is it the model architecture? Are hyperparameters tuned at all?_\\n\\nWe tuned the architecture by changing the width of the two hidden layers of the hypernetwork. Hyperparameters like the learning rate for hypernet, number of epochs, and dropout probability were tuned based on the cross-validation performance. \\n\\n_Although this passage is from the \\\"2.4.1 Cross-Session Condition\\\", it seems to imply that the train-test split is not split across sessions:\\n\\\"For the BCI IV IIa dataset, the data from all nine participants is divided into five folds with stratified cross-validation; each fold in the iteration is considered as a test set while the other set is split with an 80-20 ratio to choose the best-performing model on the validation set.\\\"\\nThe original work for the BCI competition seems to imply that there are two sessions for each subject. Is there a reason that evaluation across sessions does not seem to be done in the current work?_\\n\\nThere is no reason precisely not to follow the evaluation across sessions. We used a MOABB instance that loads both sessions by default as a dataset. We used them as a complete dataset to evaluate the results. However, we agree with the reviewer\\u2019s suggestion to standardise the results for comparison. 
The results for the across sessions are evaluated and reported in the Appendix.\"}", "{\"comment\": \"Hello, I appreciate that the answers to the questions were clear and that additional evaluations were made to address some of the high variability in results. However, primarily due to the lack of more results that show HyperEEGNet performs better than EEGNet in more scenarios or cases, my score remains the same.\"}", "{\"comment\": \"_What specific task-related information do you think should be included to optimize the input frequency and enhance model performance? How might this affect the model's practical deployment?_\\n\\nWe understand that the reviewer has referred to using task-related information from resting state EEG for hypernetwork input to optimise performance further. However, we request the reviewer to clarify the context of the term input frequency.\\n\\nUsing source-level information, i.e., connectivity across specific brain regions, instead of using sensor-level details, can be more useful and interpretable. \\nMoreover, previous works have identified a correlation with band power in the gamma band (55-85 Hz), which this work has not explored. We understand that choosing specific features from the resting state data doesn't impact the intended practical deployment as far as the paradigm includes recording resting state data.\\n\\n_Why wasn't the optimization of resting-state EEG data representations, especially concerning brain connectivity, explored? What additional features do you believe are important for downstream tasks that current measures do not capture?_\\n\\nWe assumed that the preliminary analysis and the feasibility can be validated using whole-brain connectivity based on the number of channels from a particular dataset. We also used all frequency bands relevant to the motor imagery paradigm (mu and beta bands 8-32 Hz). However, one of the previous works has cited the use of gamma band as the predictor of the BCI performance however the correlation was not as strong. \\nWe understand that the current work can validate the hypernetwork architecture by learning user-specific representations. Future work can explore optimisations and model interpretability by focusing on the neurophysiological perspectives.\\n\\n\\n_What criteria will you use to evaluate the efficacy of HyperEEGNet in comparison to other transfer learning methods? Are there particular metrics or datasets that you consider essential for this assessment?_\\n\\nWe thank the reviewers for highlighting the necessity of standardising such benchmarks in EEG classification tasks. We understand that Leave-one-subject-out and Leave-N-subject-out strategies are the best evaluation techniques across datasets. We follow those strategies to evaluate and compare the current performances. Most of the proposed techniques in transfer learning using data alignment can be combined with the current approach to validate efficacy.\\n\\nMost of the work in transfer learning focuses, on a few shot analyses. We discuss that perspective in section 4.3 Page 6, 310-315 and consider the limitations of the current work that can be explored in the future. \\nBased on the reviews, we have also listed the current state of the art using Leave-One-Subject-Out for multiple datasets. We also evaluate the Dreyer et al. dataset using Leave-N-Subjects-Out to make our experiments more robust when choosing subjects for the test set. We understand that our current approach validates the most challenging aspect of transfer learning, i.e. 
zero shot analysis.\\n\\nOur current work, after updated analysis in the Appendix section, includes 3 standard datasets in the motor imagery domain that include the variability in the number of channels, number of participants and available hardware. More datasets with a larger number of participants will be beneficial for such evaluation. Moreover, the approach can have EEG datasets from other paradigms, like inner speech recognition, that are prone to cross-subject variability.\"}", "{\"comment\": \"We thank the reviewer for sharing their suggestions and appreciate their acknowledgement of the strengths of our work. Based on the reviews, we have updated the Appendix Section in the original submission, and the updated document can be viewed in the submission.\\n\\nFollowing are the responses to the reviewer\\u2019s comments:\\n\\n_The initial evaluations rely on a relatively small dataset comprising just nine participants, which may not adequately reflect the variability found in larger populations. This raises questions about the generalizability of the findings without access to more extensive and diverse datasets._\", \"we_provide_evaluation_using_two_standard_eeg_datasets_used_in_the_benchmarks_and_evaluations\": \"BCI IV IIa with 9 participants and Dreyer et al. with 42 participants. As reviewers suggested, we have also added evaluation to validate our approach with leave N out to simulate different sizes of datasets. The results are summarised and included in the Appendix section of the revised submission.\\n\\n\\n_Additionally, while using resting-state EEG data is a novel approach, the model's performance may be affected if the quality or relevance of this data varies among different users or sessions._\\n\\nThe core idea of our work is to include resting state data and extract consistent and unique features for each participant. This approach is motivated by the previous findings showing a correlation between resting state markers and BCI performance. The model can adapt to the across-user variability and generalise using such features. \\nWe agree with the concern of quality across sessions for the same users. Citing this in our original submission, we have included resting state data preceding each trial to accommodate and build a robust model for such variations in the data.\\n\\n_Furthermore, incorporating HyperNetworks adds a layer of complexity to the training and tuning process, potentially necessitating greater computational resources and specialized knowledge for effective implementation._\\n\\nWe agree with the reviewer's concern about complexity and implementation; however, the generalisability of BCIs is more rewarding in terms of practical out-of-the-lab applications in our understanding against the cost of longer training time and minor increment in inference time. Moreover, using optimised hypernetworks, memory footprint can be reduced considerably since they reduce the storage requirements for the model weights.\\n\\n_Lastly, like many deep learning models, HyperEEGNet may have limitations in interpretability, making it difficult to ascertain how specific features impact its classification decisions._\\n\\nWe agree with the reviewer\\u2019s concern and interpretability and raised our concern in the original submission: Section 4.2, Page number 6 295-308. While the interpretation of weights generated using hypernets is not compared, EEGNet and the use of resting-state data have been thoroughly interpreted, and their neurophysiological basis is validated. 
We list a few references that are cited in the original submission that support the work: \\n\\nThis work interprets the weights learnt by EEGNet across different paradigms using EEG and validates against the known markers in neurophysiology.\\n\\nVernon J Lawhern, Amelia J Solon, Nicholas R Waytowich, Stephen M Gordon, Chou P Hung, and\\nBrent J Lance. Eegnet: a compact convolutional neural network for eeg-based brain\\u2013computer\\ninterfaces. Journal of neural engineering, 15(5):056013, 2018.\", \"following_works_have_validated_the_correlation_between_eeg_resting_state_markers_and_bci_performance_on_motor_imagery\": \"Eidan Tzdaka, Camille Benaroch, Camille Jeunet, and Fabien Lotte. Assessing the relevance of\\nneurophysiological patterns to predict motor imagery-based bci users\\u2019 performance. In 2020\\nIEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2490\\u20132495, 2020. doi: 10.1109/SMC42975.2020.9283307.\\n\\nDavid Trocellier, Bernard N\\u2019Kaoua, and Fabien Lotte. Validating neurophysiological predictors of\\nbci performance on a large open source dataset. In 9th Graz Brain-Computer Interface Conference 2024-GBCIC2024, 2024.\\n\\nBenjamin Blankertz, Claudia Sannelli, Sebastian Halder, Eva M Hammer, Andrea K\\u00a8ubler, Klaus-Robert M\\u00a8uller, Gabriel Curio, and Thorsten Dickhaus. Neurophysiological predictor of smr-based bci performance. Neuroimage, 51(4):1303\\u20131309, 2010.\"}", "{\"summary\": \"The authors introduce a new architecture called HyperEEGNet aimed at enhancing EEG-based brain-computer interface (BCI) systems. This innovation addresses issues related to lengthy calibration sessions and the limited use of resting-state EEG data in motor imagery decoding tasks. By combining HyperNetworks with the EEGNet framework, HyperEEGNet adaptively generates weights for motor imagery classification based on resting-state data. In Leave-Subject-Out scenarios using a dataset of nine participants, its performance is comparable to the baseline EEGNet. However, when applied to larger datasets with 33 participants, HyperEEGNet shows improved generalization capabilities, effectively utilizing resting-state EEG information to manage unseen subjects. The model provides strong representations in both cross-session and cross-user contexts, underscoring the value of resting-state data for tasks such as motor imagery classification. Additionally, the results indicate that HyperEEGNet has a smaller memory and storage footprint, making it well-suited for edge computing. This approach offers faster user calibration and enhances the practicality of real-world BCI applications, representing a significant advancement in the field.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"The model exhibits robust generalization capabilities, performing effectively in both Leave-Subject-Out scenarios and with larger datasets, demonstrating its ability to handle unseen subjects. Additionally, this approach promises reduced calibration time, which is essential for real-world BCI applications, thereby enhancing user-friendliness and practicality.\", \"weaknesses\": \"The initial evaluations rely on a relatively small dataset comprising just nine participants, which may not adequately reflect the variability found in larger populations. This raises questions about the generalizability of the findings without access to more extensive and diverse datasets. 
Additionally, while the use of resting-state EEG data is a novel approach, the model's performance may be affected if the quality or relevance of this data varies among different users or sessions. Furthermore, incorporating HyperNetworks adds a layer of complexity to the training and tuning process, potentially necessitating greater computational resources and specialized knowledge for effective implementation. Lastly, like many deep learning models, HyperEEGNet may have limitations in interpretability, making it difficult to ascertain how specific features impact its classification decisions.\", \"questions\": \"What specific task-related information do you think should be included to optimize the input frequency and enhance model performance? How might this affect the model's practical deployment?\\n\\nWhy wasn't the optimization of resting-state EEG data representations, especially concerning brain connectivity, explored? What additional features do you believe are important for downstream tasks that current measures do not capture?\\n\\nWhat criteria will you use to evaluate the efficacy of HyperEEGNet in comparison to other transfer learning methods? Are there particular metrics or datasets that you consider essential for this assessment?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewer Fidh\\n\\nWe kindly follow up on your feedback on our manuscript since today is the last discussion day.\\n\\nIn our earlier responses, we believe we have addressed your concerns comprehensively. We are eager to know if there are any additional suggestions or specific points we could consider to enhance our manuscript further.\\n\\nWe sincerely hope you might provide us with further insights that could guide us in strengthening our work.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"_What specific strategies were implemented to mitigate overfitting during training, especially given the observed risks at larger epoch sizes? How do you plan to validate the model's performance in cross-user scenarios beyond the training dataset?_\\n\\nDropout and Weight decay were implemented to mitigate overfitting. Since the best model was chosen based on validation accuracy, the results on the test set were robust against overfitting even if the epoch sizes were larger.\\nCross-dataset evaluation can be used to validate performance beyond the current dataset used for training. However, the major challenge is the varying number of channels. Choosing a subset of identical channels across datasets and evaluating the performance could be a valuable experiment for the future. \\n\\n_Could you elaborate on the implications of a steep learning curve and rapid convergence in just 50 epochs? What does this suggest about the model's capacity to capture complex patterns in the data?_\\n\\nWe observed that the HyperEEGNet converged with fewer epochs (<200), while EEGNet required 400-500 epochs on the Dreyer et al. dataset. While the observation is interesting, to comment on the implications or understand the learning mechanisms, the next step is to interpret and compare the EEGNet weights/activations generated using Hypernetworks with resting-state data against EEGNet with only activity data. \\n\\n_While the focus on two-class motor imagery classification is noted, what are the plans for extending this model to accommodate multiple classes or different downstream tasks? 
How do you envision addressing potential challenges in this expansion?_\\n\\nAfter updated analysis in the Appendix section, our current work includes 3 standard datasets in the motor imagery domain that include the variability in the number of channels, number of participants and available hardware. Datasets with more participants and activity classes will benefit such evaluation. Moreover, the approach can have EEG datasets from other paradigms, like inner speech recognition, that are prone to cross-subject variability. However, validating the interpretability is a potential challenge while expanding to other paradigms. Addressing the neurophysiological interpretation of generated weights and activations using HyperEEGNet is an essential next step.\\n\\n_How do you explain the performance variations among participants, particularly the discrepancy in accuracy for Participant ID 3? What insights can be gained from comparing the weights generated by the HyperNet with those from an EEGNet trained directly on activity data?_\\n\\nReferring to the BCI IV IIa dataset, we observed that EEGNet consistently performed better than HyperEEGNet. While the observation is contradictory, to comment on the implications or understand the learning mechanisms, the next step is to interpret and compare the EEGNet weights/activations generated using Hypernetworks with resting-state data against EEGNet with only activity data. We also tried using data from the 2nd session, where the performance of HyperEEGNet was not at par with EEGNet.\\n\\n\\n_Why was the optimization of resting-state EEG data representations not explored, particularly regarding brain connectivity? What additional features do you think might be important for downstream tasks that are not captured by current measures?_\\n\\nWe assumed that the preliminary analysis and the feasibility can be validated using whole-brain connectivity based on the number of channels from a particular dataset. We also used all frequency bands relevant to the motor imagery paradigm (mu and beta bands 8-32 Hz). However, one of the previous works has cited the use of gamma band as the predictor of the BCI performance however the correlation was not as strong. \\nWe understand that the current work can validate the hypernetwork architecture by learning user-specific representations. Future work can explore optimisations and model interpretability by focusing on the neurophysiological perspectives.\"}", "{\"comment\": \"Dear Reviewer qyVC\\n\\nWe kindly follow up on your feedback on our manuscript since today is the last discussion day.\\n\\nIn our earlier responses, we believe we have addressed your concerns comprehensively. We are eager to know if there are any additional suggestions or specific points we could consider to enhance our manuscript further.\\n\\nWe sincerely hope you might provide us with further insights that could guide us in strengthening our work.\\n\\nThank you for your time and consideration.\"}", "{\"comment\": \"_In Table 1, EEGNet without the HyperNet has 4 to 6 times the standard deviation as EEGNet with the HyperNet. Is there an explanation for this, especially when compared to how the ratio of the standard deviation seems to be much closer to 1 for Tables 2 and 3? 
If the same test in Table 1 is run with multiple seeds and using different subjects (not just the last 9 subjects) as the test set, would we still see such high variation across multiple folds for EEGNet without the HyperNet?_\\n\\nWe agree with the reviewer\\u2019s concern and used different seeds to perform a similar analysis. The results showed a high deviation for a particular seed value (42). Therefore, we report the analysis with Leave N out analysis instead.\\n\\n_It seems unclear when the resting state would occur for a practical control interface during deployment, which is the input used in the HyperNet. Would a separate resting-state classifier have to be used? Or would some other heuristic to determine resting state be sufficient?_\\n\\nFor practical purposes, control interfaces often include visual stimuli/feedback. BCIs may rely on time-locked activity guided by such visual stimuli, where the participant is asked to initiate a movement after a specific rest period. \\nOn the other hand, BCIs can be event-locked i.e. when a particular event is triggered, the classification of motor imagery activity starts. A classifier can then detect the resting state before the initiation of motor imagery classification. \\nMoreover, an exciting direction would be to use resting-state data recorded once for each participant during the calibration phase. If the approach is successful during deployment, one may not need the resting state data every time before the motor imagery activity is performed.\\n\\n\\n_What is the current state of the art regarding the performance of these two datasets for the metrics you evaluated?_\\n\\nSOTA on Dreyer et al. using LOSO:\\n\\n*Wang et al. 2024*\\t(Use 85 participants from Dataset A and B)\\t\\t \\t\\t \\t\\t\\n\\n75.25%\\n\\n\\n*Wimpff et al 2024* (Use 78 participants, using online mode with 1s time windows) \\t\\t\\n\\n69.29 \\u00b1 13.70%\\n\\t\\t\\n\\n* Ouahidi et al 2024 *(Details on inclusion/exclusion or time windows are not clarified)\\t\\t\\t\\t\\t\\t\\t\\n\\n89.3 (Offline)\\n\\t\\t\\t\\t\\t\\t\\t\\t\\t\\n77.5 (Online)\\n\\n\\t\\t\\t\\t\\t\\t\\t\\t\\t\\n\\n\\nWang, Yihan, et al. \\\"TFTL: A Task-Free Transfer Learning Strategy for EEG-based Cross-Subject & Cross-Dataset Motor Imagery BCI.\\\" IEEE Transactions on Biomedical Engineering (2024).\\n\\nWimpff, Martin, et al. \\\"Towards calibration-free online EEG motor imagery decoding using Deep Learning.\\\" ESANN, 2024.\\n\\nEl Ouahidi, Yassine, et al. \\\"Unsupervised Adaptive Deep Learning Method For BCI Motor Imagery Decoding.\\\" 2024 32nd European Signal Processing Conference (EUSIPCO). IEEE, 2024.\", \"sota_on_bci_iv_iia_using_loso\": \"*MI-DAGSC by Zhang el al. (2023)*\\t\\t\\t\\t\\t79.63 \\u00b1 12.27\\n(Train on session 1 and test on session 2)\\n\\nZhang, Dongxue, et al. \\\"MI-DAGSC: A domain adaptation approach incorporating comprehensive information from MI-EEG signals.\\\" Neural Networks 167 (2023): 183-198.\\n\\nAll the methods evaluated on both datasets using the LOSO strategy use domain adaptation techniques by aligning data from the target subjects. For better performance on datasets, such approaches can be combined with a complex architecture and the hypernet approach proposed here.\"}", "{\"comment\": \"Dear Reviewer gs65\\n\\nWe kindly follow up on your feedback on our manuscript since today is the last discussion day.\\n\\nIn our earlier responses, we believe we have addressed your concerns comprehensively. 
We are eager to know if there are any additional suggestions or specific points we could consider to enhance our manuscript further.\\n\\nWe sincerely hope you might provide us with further insights that could guide us in strengthening our work.\\n\\nThank you for your time and consideration.\"}", "{\"summary\": \"The paper aims to show the benefits of using a HyperNet architecture to improve the generalization capabilities of EEGNet for generalization given a large dataset. The authors also use data from the resting state before a trial as a novel input for motor imagery classification.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The paper is original in that they apply a HyperNet architecture to improve the generalization capabilities of an EEG motor imagery classifier. The paper evaluates inter-subject and inter-session performance, which are both important metrics for deployment of a BCI. The authors are fairly clear in how experiments are done, although I had some questions about intersession evaluation for the BCI IV IIa dataset. The work is significant in that a new method is evaluated on EEG data and shows strong generalization performance in a dataset with 42 subjects.\", \"weaknesses\": \"1. Claims on the strength of HyperNet + EEGNet would be improved through using a more comprehensive evaluation on the Dreyer et al. dataset. Leave-N-subjects-out train-test split should be done where around a quarter of the subjects are used as test subjects each time.\\n2. The HyperNet + EEGNet approach does not seem to work for the BCI IV IIa dataset (one of the two datasets tested). The authors mention that it is evidence that the method does not seem to work unless with a larger dataset. It would be better if there were another dataset that can be tested to show that the HyperNet + EEGNet approach does indeed improve classification given more than 9 subjects. Alternatively, the Dreyer et al. dataset could be evaluated while varying the number of subjects for training, e.g. 8, 16, 24, 32, etc to see if the trend of improving performance given more subjects occurs.\", \"questions\": \"1. The term \\\"epoch\\\" seems to be overloaded since they are common terms used in EEG and in machine learning but mean different things. I think it becomes unclear which meaning you use in the paper sometimes, for example here is the term \\\"epoch\\\" used in close proximity, although the former seems to mean a window of data and the latter seems to mean the number of times the HyperNet is trained on all batches:\\n>\\u2022 Motor imagery activity data in the form of an epoch is used to perform the binary class\\nclassification with a forward pass on EEGNet with the generated weights from HyperNet.\\n\\u2022 Cross entropy loss is accumulated for a batch of 50 epochs, and backpropagation is performed only on HyperNet parameters. Adam optimiser with learning rate 1e-4 is used.\\\" \\n\\nIt would be better if the term \\\"epoch\\\" is better clarified when used. \\n\\n2. >\\\"For the dataset from Dreyer et al. (2023), the \\u201dacquisition runs\\u201d from 33 participants are used for training and stratified 5-fold cross-validation is used to select the best model.\\\" \\n\\nWhat variables are changed to select the best model? Is it the model architecture? Are hyperparameters tuned at all?\\n\\n3. 
Although this passage is from the \\\"2.4.1 Cross-Session Condition\\\", it seems to imply that the train-test split is not split across sessions: \\n>\\\"For the BCI IV IIa dataset, the data from all nine participants is divided into five folds with stratified cross-validation; each fold in the iteration is considered as a test set while the other set is split with an 80-20 ratio to choose the best-performing model on the validation set.\\\" \\n\\nThe original work for the BCI competition seems to imply that there are two sessions for each subject. Is there a reason that evaluation across sessions does not seem to be done in the current work?\\n\\n4. In Table 1, EEGNet without the HyperNet seems to have 4 to 6 times the standard deviation as EEGNet with the HyperNet. Is there an explanation for this, especially when compared to how the ratio of the standard deviation seems to be much closer to 1 for Tables 2 and 3? If the same test in Table 1 is run with multiple seeds and using different subjects (not just the last 9 subjects) as the test set, would we still see such high variation across multiple folds for EEGNet without the HyperNet? \\n\\n5. For a practical control interface during deployment, it seems unclear when the resting state would occur, which is the input used in the HyperNet. Would a separate resting state classifier have to be used? Or would some other heuristic to determine resting state be sufficient?\\n\\n6. What is the current state of the art in terms of performance for these two datasets for the metrics you evaluated?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"_What criteria will you use to compare the efficacy of HyperEEGNet against other transfer learning approaches? Are there specific metrics or datasets that you consider critical for this comparison?_\\n\\nWe thank the reviewers for highlighting the necessity of standardising such benchmarks in EEG classification tasks. We understand that Leave-one-subject-out and Leave-N-subject-out strategies are the best evaluation techniques across datasets. We follow those strategies to evaluate and compare the current performances. Most of the proposed techniques in transfer learning using data alignment can be combined with the current approach to validate the efficacy.\\n\\nMost of the work in transfer learning focuses, on a few shot analyses. We discuss that perspective in section 4.3 Page 6, 310-315 and consider the limitations of the current work that can be explored in the future. \\nBased on the reviews, we have also listed the current state of the art using Leave-One-Subject-Out for multiple datasets. We also evaluate the Dreyer et al. dataset using Leave-N-Subjects-Out to make our experiments more robust when choosing subjects for the test set. We understand that our current approach validates the most challenging aspect of transfer learning, i.e. zero shot analysis.\\n\\nOur current work after updated analysis in the Appendix section includes 3 standard datasets in the motor imagery domain that includes the variability in number of channels, number of participants and available hardware. More datasets with a larger number of participants will be beneficial for such evaluation. Moreover, the approach can have EEG datasets from other paradigms, like inner speech recognition, that are prone to cross-subject variability. 
\\n\\n_What specific task-related information do you believe should be incorporated to optimize the input frequency and further enhance model performance? How will this impact the model's practical deployment?_\\n\\nWe understand that the reviewer has referred to using task-related information from resting state EEG for hypernetwork input to optimise performance further. However, we request the reviewer to clarify the context of the term input frequency.\\n\\nUsing source-level information, i.e., connectivity across specific brain regions, instead of using sensor-level details, can be more useful and interpretable. \\nMoreover, previous works have identified a correlation with band power in the gamma band (55-85 Hz), which this work has not explored. We understand that choosing specific features from the resting state data doesn't impact the intended practical deployment as far as the paradigm includes recording resting state data.\"}", "{\"comment\": \"We thank the reviewer for sharing their suggestions and appreciate their acknowledgement of the strengths of our work. Based on the reviews, we have updated the Appendix Section in the original submission, and the updated document can be viewed in the submission.\\n\\nFollowing are the responses to the reviewer\\u2019s comments:\\n\\n_Limited Dataset Size: The initial evaluations involve a relatively small dataset with only nine participants, which may not fully capture the variability present in broader populations. The generalizability of the findings could be questioned without larger, more diverse datasets._\", \"we_provide_evaluation_using_two_standard_eeg_datasets_used_in_the_benchmarks_and_evaluations\": \"BCI IV IIa with 9 participants and Dreyer et al. with 42 participants. As reviewers suggested, we have also added three more evaluations to validate our approach. The results are summarised and included in the Appendix section.\\n\\n_Dependence on Resting-State Data: While leveraging resting-state EEG data is innovative, the model's effectiveness might be limited if the quality or relevance of the resting-state data varies across users or sessions._\\n\\nThe core idea of our work is to include resting state data and extract consistent and unique features for each participant. This approach is motivated by the previous findings showing a correlation between resting state markers and BCI performance. The model can adapt to the across-user variability and generalise using such features. \\nWe agree with the concern of quality across sessions for the same users. Citing this in our original submission, we have included resting state data preceding each trial to accommodate and build a robust model for such variations in the data.\\n\\n_Complexity of HyperNetworks: The integration of HyperNetworks may introduce additional complexity in model training and tuning, potentially requiring more computational resources and expertise to implement effectively._\\n\\nWe agree with the reviewer's concern about complexity and implementation; however, the generalisability of BCIs is more rewarding in terms of practical out-of-the-lab applications in our understanding against the cost of longer training time and minor increment in inference time. 
Moreover, using optimised hypernetworks, memory footprint can be reduced considerably since they reduce the storage requirements for the model weights.\\n\\n_Interpretability: As with many deep learning models, the interpretability of HyperEEGNet's decision-making process might be limited, making it challenging to understand how specific features influence classifications._\\n\\nWe agree with the reviewer\\u2019s concern and interpretability and raised our concern in the original submission: Section 4.2, Page number 6 295-308. While the interpretation of weights generated using Hypernetwork is not compared, EEGNet and the use of resting-state data have been thoroughly interpreted, and their neurophysiological basis is validated. We list a few references that are cited in the original submission that support the work: \\n\\nThis work interprets the weights learnt by EEGNet across different paradigms using EEG and validates against the known markers in neurophysiology.\\n\\nVernon J Lawhern, Amelia J Solon, Nicholas R Waytowich, Stephen M Gordon, Chou P Hung, and\\nBrent J Lance. Eegnet: a compact convolutional neural network for eeg-based brain\\u2013computer\\ninterfaces. Journal of neural engineering, 15(5):056013, 2018.\", \"following_works_have_validated_the_correlation_between_eeg_resting_state_markers_and_bci_performance_on_motor_imagery\": \"Eidan Tzdaka, Camille Benaroch, Camille Jeunet, and Fabien Lotte. Assessing the relevance of\\nneurophysiological patterns to predict motor imagery-based bci users\\u2019 performance. In 2020\\nIEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 2490\\u20132495, 2020. doi: 10.1109/SMC42975.2020.9283307.\\n\\nDavid Trocellier, Bernard N\\u2019Kaoua, and Fabien Lotte. Validating neurophysiological predictors of\\nbci performance on a large open source dataset. In 9th Graz Brain-Computer Interface Conference 2024-GBCIC2024, 2024.\\n\\nBenjamin Blankertz, Claudia Sannelli, Sebastian Halder, Eva M Hammer, Andrea K\\u00a8ubler, Klaus-Robert M\\u00a8uller, Gabriel Curio, and Thorsten Dickhaus. Neurophysiological predictor of smr-based bci performance. Neuroimage, 51(4):1303\\u20131309, 2010.\"}", "{\"comment\": \"Dear Reviewer CxNP\\n\\nWe kindly follow up on your feedback on our manuscript since today is the last discussion day.\\n\\nIn our earlier responses, we believe we have addressed your concerns comprehensively. We are eager to know if there are any additional suggestions or specific points we could consider to enhance our manuscript further.\\n\\nWe sincerely hope you might provide us with further insights that could guide us in strengthening our work.\\n\\nThank you for your time and consideration.\"}", "{\"summary\": \"In this paper, the authors proposed a HyperEEGNet architecture by combining the conventional HyperNetwork and EEGNet to adress cross-user variability for MI-BCI systems. 
The authors compared the performance of the proposed HyperEEGNet with that of competing EEGNet on various publicly available MI-EEG datasets in both cross-session and cross-user conditions.\", \"soundness\": \"1\", \"presentation\": \"1\", \"contribution\": \"1\", \"strengths\": \"This study try to address the important issue of cross-user variability and BCI illiteracy issues in the MI-EEG analysis by adopting the ability of HyperNetwork to adaptive weight generation to learn user-specific representations.\", \"weaknesses\": \"There is no substantial innovation in proposed method combining the conventional HyperNetworks and EEGNet.\\n\\nThe performance improvement of the proposed method over existing EEGNet has not been consistently demonstrated across multiple datasets. This is, the proposed model achieved improved performance on the Dreyer et al. dataset, while its performance degraded on the BCI Competition IV IIa dataset. Furthermore, there has been no meaningful discussion about these conflicting results.\\n\\nNo comparisons were conducted with existing state-of-the-art methods that have addressed the subject variability issue.\", \"questions\": \"The proposed model, which simply combines existing HyperNetwork and EEGNet, lacks substantial innovation. In addition, it is difficult to confirm the advantages of the proposed model from comparative experiment results as the performance improvement of the proposed method has not been consistently demonstrated across multiple datasets. This is, the proposed model achieved improved performance on the Dreyer et al. dataset in Table 1, while its performance degraded on the BCI Competition IV IIa dataset in Table 2. Furthermore, there has been no meaningful discussion about these conflicting results.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"1\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper presents a novel architecture, HyperEEGNet, which integrates HyperNetworks (HNs) with the EEGNet architecture, to improve EEG-based BCI systems, addressing the limitations of long calibration sessions and the underutilization of resting-state EEG data in motor imagery decoding tasks. Tacking variability across subjects is an important issue in BCI as well. The paper demonstrates that the proposed model has domain generalization capabilities. There are a few concerns raised by reviewers. The main criticism is in limited dataset size and performance evaluation. It will be better to use more diverse larger datasets for performance evaluation. Since there are a lot of work on handling subject variability in BCI, it will be better to include some comparisons with other approaches. Therefore, the paper is not recommended for acceptance in its current form. I hope authors found the review comments informative and can improve their paper by addressing these carefully in future submissions.\", \"additional_comments_on_reviewer_discussion\": \"While the authors made efforts for rebuttal, there was no change during the discussion period. All of reviewers stood by their original decisions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"We thank the reviewer for sharing their opinions and suggestions.\\n\\nWe also thank the reviewer for acknowledging the motivation behind the work addressing cross-user variability and BCI illiteracy in the motor imagery domain. 
\\nHowever, we would like to draw the reviewer\\u2019s attention towards the approach of using resting state data as a means to learn user-specific representations. Previous literature (discussed in Introduction section on Page 2, 59-64) cites specific markers in resting state data, which highly correlates with BCI illiteracy and indicates the relevant user-specific information in resting state data when performing motor imagery. These works motivate our work specifically to use resting state data and not just the motor imagery data recorded during the experiments. \\nBased on the reviews, we have updated the Appendix Section in the original submission and the updated document can be viewed in the submission.\\n\\nFollowing are the point-wise responses to the reviewer\\u2019s comments:\\n\\n_There is no substantial innovation in proposed method combining the conventional HyperNetworks and EEGNet._\\n\\nAs far as we know, this work is the first attempt to include resting state data to learn data-driven representations for motor imagery tasks. Previous works have used markers from resting state to predict the extent of BCI illiteracy. \\nIt would be helpful if the reviewer could direct us to relevant sources they found similar to the proposed method. We agree that the concepts of Hypernetworks and EEGNet are not novel, but their application in the current context by learning EEGNet weights for the downstream task of motor imagery using resting state is unexplored and novel. \\n\\n\\n_The performance improvement of the proposed method over existing EEGNet has not been consistently demonstrated across multiple datasets. This is, the proposed model achieved improved performance on the Dreyer et al. dataset, while its performance degraded on the BCI Competition IV IIa dataset._\\n\\n\\nWe acknowledge the reviewer\\u2019s concern. The motivation here was to demonstrate the current approach's relevance on two different dataset sizes, where BCI competition dataset has fewer participants than Dreyer et al. \\nWe also perform Leave-N-Out evaluation with varying combinations on large datasets like Dreyer et al. to verify the effect of training dataset size. \\n\\n_Furthermore, there has been no meaningful discussion about these conflicting results._\\n\\nWe would like to draw attention to section 4.1, page 6, 283-285 of the paper, where we discuss how comparison encourages to collection of larger datasets and the inherent assumption of the conflicting results being a smaller number of participants.\\nBased on the reviews, we perform the following additional evaluation on existing and several other MI datasets:\\n1. Use statistical tests to confirm significance.\\n2. Perform Leave N out with N values ranging from 8-32 to validate the effect of training dataset size.\\n\\n_No comparisons were conducted with existing state-of-the-art methods that have addressed the subject variability issue._\\n\\nSeveral architectures have been proposed to address subject variability issues, however the approaches have focused on transfer learning or few shot paradigms that use labelled data from target participants. \\n\\nThe following works have used a subject-independent / zero-shot / leave-one-subject-out strategy to test the effectiveness without training on labelled data from the target subject. However, the approaches and their performance depend on the architecture used for the dataset. While our aim is to test the effectiveness of using resting state information for downstream EEG classification. 
It is preferable to evaluate considering the baseline with the main architecture (EEGNet) for task classification. We choose EEGNet as it has been well interpreted and benchmarked across various paradigms in EEG domain.\\n\\nO. -Y. Kwon, M. -H. Lee, C. Guan and S. -W. Lee, \\\"Subject-Independent Brain\\u2013Computer Interfaces Based on Deep Convolutional Neural Networks,\\\" in IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 10, pp. 3839-3852, Oct. 2020, doi: 10.1109/TNNLS.2019.2946869.\\n\\nZhang, Kaishuo, et al. \\\"Adaptive transfer learning for EEG motor imagery classification with deep convolutional neural network.\\\" Neural Networks 136 (2021): 1-10.\\n\\nFollowing paper uses domain adaptation, and has evaluated their effectiveness on BCI Competition IVa and IVb datasets. However, domain adaptation/semi-supervised approaches are not mutually exclusive and may be integrated with our approach to optimize the performance.\\n\\n\\nZhang, Dongxue, et al. \\\"MI-DAGSC: A domain adaptation approach incorporating comprehensive information from MI-EEG signals.\\\" Neural Networks 167 (2023): 183-198.\"}", "{\"summary\": \"The authors propose a novel architecture, HyperEEGNet, to improve EEG-based brain-computer interface (BCI) systems, addressing the limitations of long calibration sessions and the underutilization of resting-state EEG data in motor imagery decoding tasks. By integrating HyperNetworks with the EEGNet architecture, HyperEEGNet adaptively generates weights for motor imagery classification based on resting-state data. In Leave-Subject-Out scenarios using a dataset with nine participants, the model performs comparably to the baseline EEGNet. However, when scaled to datasets with 33 participants, HyperEEGNet demonstrates enhanced generalization capabilities, effectively leveraging resting-state EEG information to handle unseen subjects. The model achieves robust representations in both cross-session and cross-user scenarios, highlighting the potential of resting-state data for downstream tasks like motor imagery classification. Furthermore, the findings indicate that HyperEEGNet's smaller footprint reduces memory and storage requirements, making it suitable for edge computing. This approach promises faster user calibration and improved feasibility for real-world BCI applications, advancing the field significantly.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": \"Robust Generalization: The model demonstrates strong generalization capabilities, performing well in both Leave-Subject-Out scenarios and with larger datasets, indicating its effectiveness in handling unseen subjects.\", \"reduced_calibration_time\": \"The approach promises faster user calibration, which is crucial for real-world BCI applications, making it more user-friendly.\", \"weaknesses\": \"Limited Dataset Size: The initial evaluations involve a relatively small dataset with only nine participants, which may not fully capture the variability present in broader populations. 
The generalizability of the findings could be questioned without larger, more diverse datasets.\", \"dependence_on_resting_state_data\": \"While leveraging resting-state EEG data is innovative, the model's effectiveness might be limited if the quality or relevance of the resting-state data varies across users or sessions.\", \"complexity_of_hypernetworks\": \"The integration of HyperNetworks may introduce additional complexity in model training and tuning, potentially requiring more computational resources and expertise to implement effectively.\", \"interpretability\": \"As with many deep learning models, the interpretability of HyperEEGNet's decision-making process might be limited, making it challenging to understand how specific features influence classifications.\", \"questions\": \"What specific strategies were implemented to mitigate overfitting during training, especially given the observed risks at larger epoch sizes? How do you plan to validate the model's performance in cross-user scenarios beyond the training dataset?\\n\\nCould you elaborate on the implications of a steep learning curve and rapid convergence in just 50 epochs? What does this suggest about the model's capacity to capture complex patterns in the data?\\n\\nWhile the focus on two-class motor imagery classification is noted, what are the plans for extending this model to accommodate multiple classes or different downstream tasks? How do you envision addressing potential challenges in this expansion?\\n\\n How do you explain the performance variations among participants, particularly the discrepancy in accuracy for Participant ID 3? What insights can be gained from comparing the weights generated by the HyperNet with those from an EEGNet trained directly on activity data?\\n\\nWhy was the optimization of resting-state EEG data representations not explored, particularly regarding brain connectivity? What additional features do you think might be important for downstream tasks that are not captured by current measures?\\n\\nWhat criteria will you use to compare the efficacy of HyperEEGNet against other transfer learning approaches? Are there specific metrics or datasets that you consider critical for this comparison?\\n\\nWhat specific task-related information do you believe should be incorporated to optimize the input frequency and further enhance model performance? How will this impact the model's practical deployment?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}" ] }
03u7pbpyeN
BEATS: Optimizing LLM Mathematical Capabilities with BackVerify and Adaptive Disambiguate based Efficient Tree Search
[ "Linzhuang Sun", "Hao Liang", "Jingxuan Wei", "Bihui Yu", "Conghui He", "Zenan Zhou", "Wentao Zhang" ]
Large Language Models (LLMs) have exhibited exceptional performance across a broad range of tasks and domains. However, they still encounter difficulties in solving mathematical problems due to the rigorous and logical nature of mathematics. Previous studies have employed techniques such as supervised fine-tuning (SFT), prompt engineering, and search-based methods to improve the mathematical problem-solving abilities of LLMs. Despite these efforts, their performance remains suboptimal and demands substantial computational resources. To address this issue, we propose a novel approach, BEATS, to enhance mathematical problem-solving abilities. Our method leverages newly designed prompts that guide the model to iteratively rewrite, advance by one step, and generate answers based on previous steps. Additionally, we introduce a new back-verification technique that uses LLMs to validate the correctness of the generated answers. Furthermore, we employ a pruning tree search to optimize search time while achieving state-of-the-art (SOTA) performance. Notably, our method improves Qwen2-7b-Instruct's score from 36.94 to 61.52 (outperforming GPT-4’s 42.5) on the MATH benchmark.
[ "Large Language Models", "Tree Search", "Back Verification" ]
https://openreview.net/pdf?id=03u7pbpyeN
https://openreview.net/forum?id=03u7pbpyeN
ICLR.cc/2025/Conference
2025
{ "note_id": [ "mkWweEKsMP", "ildClgZp2j", "EmIldATtsx", "45SlwMDmRg", "0Md90Alr6g" ], "note_type": [ "comment", "official_review", "official_review", "official_review", "official_review" ], "note_created": [ 1731458736847, 1730357724254, 1730287257817, 1730289122862, 1730571629414 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4202/Authors" ], [ "ICLR.cc/2025/Conference/Submission4202/Reviewer_ywsT" ], [ "ICLR.cc/2025/Conference/Submission4202/Reviewer_omp7" ], [ "ICLR.cc/2025/Conference/Submission4202/Reviewer_WZRU" ], [ "ICLR.cc/2025/Conference/Submission4202/Reviewer_jyaG" ] ], "structured_content_str": [ "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\"}", "{\"summary\": \"This work investigates both prompt-based and search-based methods to enhance the mathematical reasoning abilities of large language models. The authors improve traditional search-based methods by pruning the search tree using carefully crafted prompts. A disambiguation prompt clarifies the original problem, while two additional prompts guide reasoning steps and determine search termination. Different pruning strategies are tailored to each type of prompt. The authors also introduce a self-correction mechanism called back-verification, where LLMs validate answer candidates by concatenating them with the original problem. The method\\u2019s effectiveness is evaluated across 5 math reasoning benchmarks.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"1\", \"strengths\": \"1. The paper presents a novel approach that combines tree search with back-verification and adaptive disambiguation to enhance the mathematical reasoning capabilities of large language models (LLMs).\\n2. Ablation studies are conducted to assess the impact of key components in the proposed method, focusing on the contributions of the disambiguation and back-verification modules.\\n3. The pruning in the tree search effectively reduces the problem search space, improving computational efficiency.\", \"weaknesses\": \"1. The proposed approach lacks substantial novelty.\\n2. The selection of baselines for comparison in search-based methods is not sufficiently justified. Zhang et al. [1] use MCTS with LLaMA3 8B (which is also used in this paper) to enhance mathematical reasoning in LLMs, achieving 96.66% accuracy on GSM8K and 58.24% on MATH, which is significantly higher than the results of this approach.\\n3. Although an ablation study on the BackVerify component is included, comparisons with other verification methods are lacking. For instance, the ReST paper [2] evaluates the impact of different verifiers on performance, but similar evaluations are absent in this work.\\n4. While pruning tree search is a key contribution of the paper, there is no experimental analysis on the extent to which the pruning strategy reduces search time. Additionally, comparing the total inference time with other search-based methods is essential to substantiate the advantages of the pruning approach.\\n\\n**References:**\\n- [1] Zhang, D., Huang, X., Zhou, D., Li, Y., & Ouyang, W. (2024). *Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo Tree Self-refine with LLaMa-3 8B*. arXiv preprint arXiv:2406.07394.\\n- [2] Zhang, D., Zhoubian, S., Hu, Z., Yue, Y., Dong, Y., & Tang, J. (2024). *ReST-MCTS: LLM Self-Training via Process Reward Guided Tree Search*. arXiv preprint arXiv:2406.03816.\", \"questions\": \"1. 
How do authors verify that the disambiguation prompt effectively resolves ambiguous problem statements? Although the ablation study indicates that this prompt improves final performance, a more detailed analysis is needed. For instance, do all problems correctly solved without the disambiguation prompt remain correct when it is applied?\\n2. Which version of GPT-4 is used for evaluation? If the results are referenced from OpenAI papers or technical blogs, please provide the appropriate citations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper studies the mathematical reasoning problem in aspects of prompt engineering. The authors highlight the suboptimal prompts, high costs, and ineffective verification issues, and propose a tree-search-based prompt engineering method. The experiments show that the proposed method outperforms existing methods by a margin.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The challenges proposed by the authors are reasonable. These challenges can inspire future research. The proposed method combines techniques that successfully alleviate the problems.\", \"The experimental results are promising. The proposed method significantly\", \"improves the performance of each base model compared to the comparison\", \"methods.\", \"This paper is well-written and organized.\"], \"weaknesses\": \"- The novelty of this paper is somewhat limited. For example, the back verification has already been proposed in [1]. The heuristic pruning rules, e.g., Rule (3), are also common used in math reasoning. Tree-based searching methods [2] are not new either.\\n- The inference cost of each method should be reported. As the SFT and zero-shot methods usually require one inference, the proposed methods require multiple samplings, making the comparison unfair.\\n- The experimental results require deeper discussion. For example, the authors mention an issue with \\\"ambiguous problem statements\\\" and introduce a prompt engineering method to address it. However, there is insufficient explanation of how having the LLM rewrite the problem itself resolves this issue, and there is no comparison between the original and rewritten versions to demonstrate the effectiveness of the LLM. Additionally, if the LLM can rewrite the problem on its own, why can't it directly solve the problem?\\n\\n[1] Large Language Models are Better Reasoners with Self-Verification. EMNLP\\n(Findings) 2023: 2550-2575\\n\\n[2] Accessing GPT-4 level Mathematical Olympiad Solutions via Monte Carlo\", \"tree_self_refine_with_llama_3_8b\": \"A Technical Report.\", \"questions\": \"Please also refer to the weakness section.\\n1. The overall framework is based on prompt engineering, which strongly relies on the capability of LLM. Can the proposed method give such significant performance improvement when dealing with Olympiad math reasoning datasets, e.g., AIME, Olympiad?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents BEATS, a novel approach to improving the mathematical problem-solving abilities of large language models (LLMs). It introduces a method that combines enhanced prompting, tree search with pruning, and a back-verification technique. 
BEATS claims significant improvements, particularly with the Qwen2-7B model, outperforming benchmarks such as GPT-4 on the MATH dataset.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"BEATS uses a unique tree search strategy with pruning and back-verification to streamline reasoning paths and verify answers, improving accuracy and efficiency.\", \"Empirical results across multiple datasets (MATH, GSM8K, SVAMP, etc.) show notable improvement over existing methods.\", \"The inclusion of a question disambiguation component helps clarify ambiguous problems, potentially reducing error.\"], \"weaknesses\": [\"This component, though effective, adds additional steps to the inference phase, potentially affecting efficiency in real-time applications.\", \"The paper could benefit from a more detailed discussion of the limitations of the proposed methods and potential areas for future work, such as the impact of training data on performance.\", \"Further discussion on how the pruning limits affect accuracy vs. computation trade-off would add valuable insight.\"], \"questions\": [\"How does the diversity and quality of the training data influence the performance of BEATS, particularly in edge cases or complex problems?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper presents BEATS, a framework that enhances mathematical problem-solving in language models by introducing targeted prompting strategies that guide the model through a step-by-step approach to decompose complex problems. Furthermore, BEATS incorporates a tree search mechanism, enabling exploration of each decision step individually, which helps refine solutions iteratively. The experiments demonstrate a significant performance increase on standard benchmarks.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": \"The proposed method showcases two key features\\u2014*disambiguation* and *back-verification*\\u2014that notably enhance the model's reasoning process, as confirmed by the ablation study. *Disambiguation* helps clarify problem statements at each reasoning step, reducing the likelihood of misinterpretation, while *back-verification* provides a robust mechanism to cross-check each solution against previous steps. Together, these techniques improve benchmark performance by a substantial margin.\", \"weaknesses\": [\"The paper combines existing approaches, such as tree search and reflective reasoning techniques, but falls short of introducing transformative new methods. While effective, the design lacks substantial innovation in handling complex reasoning beyond prior approaches.\", \"A significant issue lies in the increased computational cost introduced by the extra steps, including disambiguation and back-verification. Although these steps improve accuracy, their contribution to computational overhead is not quantified, making it challenging to assess the overall efficiency.\", \"Despite mentioning computational challenges in the introduction, the paper lacks a thorough analysis of the actual cost implications. 
The pruning technique within tree search is minimalistic, relying on basic conditions to halt expansion rather than addressing cost at a fundamental level.\", \"Some areas in the paper, particularly Section 2.3, contain formatting issues, such as duplicated author names.\"], \"questions\": \"Could the authors provide more details on the computational trade-offs involved?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}" ] }
03OkC0LKDD
Adaptive Gradient Clipping for Robust Federated Learning
[ "Youssef Allouah", "Rachid Guerraoui", "Nirupam Gupta", "Ahmed Jellouli", "Geovani Rizk", "John Stephan" ]
Robust federated learning aims to maintain reliable performance despite the presence of adversarial or misbehaving workers. While state-of-the-art (SOTA) robust distributed gradient descent (Robust-DGD) methods were proven theoretically optimal, their empirical success has often relied on pre-aggregation gradient clipping. However, existing static clipping strategies yield inconsistent results: enhancing robustness against some attacks while being ineffective or even detrimental against others. To address this limitation, we propose a principled adaptive clipping strategy, Adaptive Robust Clipping (ARC), which dynamically adjusts clipping thresholds based on the input gradients. We prove that ARC not only preserves the theoretical robustness guarantees of SOTA Robust-DGD methods but also provably improves asymptotic convergence when the model is well-initialized. Extensive experiments on benchmark image classification tasks confirm these theoretical insights, demonstrating that ARC significantly enhances robustness, particularly in highly heterogeneous and adversarial settings.
[ "Federated learning", "robustness", "Byzantine resilience" ]
Accept (Spotlight)
https://openreview.net/pdf?id=03OkC0LKDD
https://openreview.net/forum?id=03OkC0LKDD
ICLR.cc/2025/Conference
2025
{ "note_id": [ "z0LhvNOTdN", "yP7ScSI8MJ", "wufhL9EoLg", "ryDOSDjblA", "rElphDCEZq", "oAZi66ciRc", "m6vekDj6VM", "lIeW4bd2I9", "jxudMmU7op", "gnMA0VquGi", "g36PxAVpQN", "bbdm9g2db5", "aF7V4zgYTR", "Zc0g6P0bzv", "TKnO6M1glb", "NmGXeT1zAY", "NHOJ5LqlA3", "LTj9UcZW51", "KB1hhgvXAJ", "IyloLRkfAq", "EQuP9uD8Fc", "EDd4eYRiFE", "B3SArAtP9m", "85P27qUzgz", "7NolxNIwEr", "4B8bmFtv9P", "2KXewfvMLp" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "decision", "meta_review", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1731751995594, 1732469573608, 1732526228248, 1729623635506, 1731973424430, 1732526170031, 1732014466639, 1732526561118, 1732464587809, 1732444769446, 1731751971698, 1732488060801, 1731752160364, 1732012496914, 1730723895167, 1732308657891, 1730577684721, 1732538164868, 1730494363564, 1731923374969, 1732525430969, 1737524032973, 1734405637117, 1732707660956, 1731751437818, 1732541063678, 1731751129555 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_KAQT" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_n1Gk" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_n1Gk" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_zvxn" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_Ap5T" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_KAQT" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_zvxn" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_KAQT" ], [ "ICLR.cc/2025/Conference/Submission10208/Reviewer_Ap5T" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10208/Area_Chair_tXob" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ], [ "ICLR.cc/2025/Conference/Submission10208/Authors" ] ], "structured_content_str": [ "{\"comment\": \"### On the Overhead of ARC\\nARC\\u2019s complexity is $\\\\mathcal{O}(nd + n\\\\log(n))$(see Appendix A), with \\n$n$ as the number of workers and $d$ as the model dimension.\\nWhile it is true that ARC incurs a linear dependence on \\n$d$, it is worth noting that this is unavoidable, as all robust aggregation methods must process gradients across model dimensions. 
Thus, ARC\\u2019s linear dependence on $d$ is a fundamental requirement shared across robust aggregations.\\nWhat is particularly critical to consider, however, is the dependence on \\n$n$, the number of workers, as this truly distinguishes between efficient and costly aggregation methods, especially in the context of scaling to large language models.\\nARC\\u2019s complexity in terms of \\n$n$ remains modest whereas more costly aggregation schemes like Nearest Neighbor Mixing (NNM), which is crucial in high heterogeneity, exhibit $\\\\mathcal{O}(dn^2)$ complexity. In distributed ML scenarios where \\n$n$ is large, this difference in $n$-scaling makes ARC a comparatively efficient choice.\\nHowever, we recognize the value of further efficiency improvements and agree that developing a more computationally efficient adaptive clipping mechanism that retains ARC\\u2019s resilience could be an interesting avenue for future research.\\nWe will make this clear in the paper and we thank the reviewer for pointing this out.\\n\\n### On ARC Clipping Aggressively\\nWe thank the reviewer for this interesting question. As explained above, the clipping threshold depends on the number of tolerated Byzantine workers $f$. By construction, therefore, the clipping applied by ARC depends on the threat under consideration. If the number of Byzantine workers is small, then ARC will clip only a small portion of the gradients; on the contrary, if the number of Byzantine workers $f$ is large, ARC clips more gradients to limit the impact of malicious vectors while ensuring that at least one honest gradient will remain unclipped (see Lemma B.1).\\n\\n### Loss Example where $B$ is Small\\nIndeed, there is no guarantee of convergence if $\\\\kappa B^2 \\\\geq 1$. In Figure 3 of *Robust Distributed Learning: Tight Error Bounds and Breakdown Point under Data Heterogeneity* (Allouah et al., page 9), the authors trained a logistic regression model on MNIST and showed that the associated error (taking into account $G$ and $B$) is small, while empirically guaranteeing that $\\\\kappa B^2 < 1$. In this case, the loss function used was the Negative Log Likelihood loss.\"}", "{\"comment\": \"I thank the authors for the detailed response. I have understood the meaningfulness of Theorem 5.2. Meanwhile, I would like to present the remaining concern below.\\n\\n**Remaining concern**: Although I appreciate the authors' explanation about Theorem 5.2, I am wondering why $f/n$ can be arbitrarily close to BP. Integer $n$ is decided by the problem. Thus, the difference between $f/n$ and BP should be at least $1/n$ for any specific problem with $n$ workers. Thus, $\\\\xi=\\\\frac{BP-f/n}{BP}>2/n$ when there are $n$ workers since $BP<1/2$. Therefore, $v\\\\geq\\\\xi\\\\cdot\\\\Phi(G,B,\\\\rho)>\\\\frac{2\\\\Phi(G,B,\\\\rho)}{n}$. Noticing that $\\\\Phi(G,B,\\\\rho)>640$ according to the definition (probably much larger than $640$). Therefore, the theorem shows the improvement of ARC only when $n$ is very large and $f/n$ is extremely close to BP.\\n\\nMeanwhile, I would like to clarify that the concern above is just to point out the limitation in application scope of Theorem 5.2. 
Overall, I am inclined to raise my rating after reading the authors' explanation, and I would like to hear the authors' response to the remaining concern above before finally deciding my rating.\"}", "{\"comment\": \"Thank you for your time and the constructive feedback that will improve the quality of the paper.\"}", "{\"summary\": \"The authors introduce a variation on the static clipping technique used to overcome byzantine attacks in federated learning. Their algorithm makes the process dynamic by adapting the clipping value to the input gradients themselves; this algorithm, called ARC, or Adaptive Robust Clipping, is proved to be robust: (f, 3k)-robust. More importantly, the authors prove that static clipping breaks the standard (f,k)-robustness, which highlights the shortcomings of the empirical results demonstrated in the papers highlighted in paragraph five of the Introduction. These reasons motivate the need for a dynamic approach to gradient clipping. ARC was paired with and tested using the following techniques: coordinate-wise trimmed mean, geometric median, multi-Krum, and nearest-neighbor mixing. Various simulations were ran by the authors to show the utility of ARC, these include: simulations on varying degrees of data heterogeneity, simulations on varying f (the number of malicious agents), simulations showing the improved breakdown point. All of these simulations show how ARC can provide robustness.\\n\\nA current problem point is that the authors perform simulations and demonstrate against not using gradient clipping. In paragraph 5 of the Introduction, the authors clearly state that static methods are a problem that their approach, ARC, solves. Then, the authors proceed to perform simulations and do not compare their results against static clipping, but compare against no clipping. It is known, and evidenced by the cited work, that not clipping is a problem that is overcome by using some form of clipping. Therefore, results compared against not clipping yields no additional information. While the authors have shown that ARC has obvious utility that could help overcome known issues, readers cannot determine the excess utility over using static clipping. While I believe the paper holds merit, as the empirical results show, I do not believe the paper can proceed without the authors running the experiments again and showing the results with static gradient clipping. The comparison between static and dynamic clipping is the fundamental point of the paper and not having a comparison of the two makes the paper unqualified to proceed. If the authors can show those results, so readers, such as myself, can see how much improvement is gained by using a dynamic choice for clipping, then I believe the paper will contain enough merit to be accepted and to receive a higher score.\\n\\nAs a final, syntactic, comment, I believe the authors should move the Related Works section to an earlier spot in the paper so readers can more easily understand the historical context and how the motivation for the novel work. This swap will increase flow of understanding for the reader who will have to exert less mental effort to juggle the chronological and intellectual pieces together.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"The paper introduces dynamic gradient clipping as an improvement to its static counterpart. More importantly it proves that the static approach is not robust, in accordance with the definition of robustness given in the paper. 
Furthermore, the authors prove the robustness of their approach and provide empirical evidence of the utility of their algorithm.\", \"weaknesses\": \"The major weakness of the paper, which is a critical one, is the complete lack of comparison of ARC versus static clipping. The authors run experiments against not using clipping; this is rendered moot by prior work and therefore is not a necessary point of comparison. The authors must go back and run the same experiments they ran with static gradient clipping and plot that against their dynamic approach. Without doing this, it is not possible to determine the benefit of their work over prior work.\", \"questions\": \"1) Why were no experiments run using static clipping?\\n2) What is the reason for only using a network with 10 agents? Typically, networks with more agents that test for heterogeneity have a harder time than those with fewer agents because there is a wider gap between intra- and inter-class datasets.\\n3) What was the reason for selecting 17 agents for the simulations in the section \\\"Improved robustness on CIFAR-10\\\"? Can you expand to more agents?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Comment on \\\"Comparison with Static Clipping\\\"\\nI appreciate the comment from the authors regarding their extensive testing. I kindly remind the authors that reviewers are not under the obligation to review the Appendix. For this reason I gave the comment which I did. Considering a comparison with static clipping is the most important point in the paper, I strongly urge the authors to rearrange their writing to include the results of Appendix F.\\n\\nComment on \\\"Large Scale Computing\\\"\\nIn the paper, it is stated that ARC does not introduce significant overhead. How does that statement add up to the comment provided by the authors stating that they lack computational resources for an experiment with more than 17 workers? I believe it is important to add an appendix section and a comment in the paper itself referencing the computational power limitations of the framework or at least to make a statement on the machinery used to justify the limited resources.\"}", "{\"comment\": \"We thank Reviewer Ap5T for their thoughtful questions, which we address below.\\n\\n### **On the Assumption of Knowing $f$.**\\nFirst, we would like to emphasize that the assumption of knowing the number of Byzantine workers ($f$) is standard in the literature on Byzantine-robust ML. Numerous prior works rely on this assumption, including (Farhadkhani et al., 2022; Allouah et al., 2023a; Karimireddy et al., 2022; Yin et al., 2018; Karimireddy et al., 2021). In practice, $f$ is typically treated as the maximum number of Byzantine workers the system is expected to tolerate, and it is set by practitioners based on their desired level of robustness. We will explicitly clarify this point in the paper to ensure this context is fully transparent.\\n\\n### **On Practical Guidelines for Estimating $f$.**\\nIn practical scenarios, estimating $f$ can indeed be challenging, especially since Byzantine behavior is often unpredictable. In such cases, we recommend that practitioners adopt a conservative approach, setting $f$ based on the worst-case scenario they anticipate for their system. This ensures that the algorithm remains robust under the expected threat level.
We will include these practical guidelines in the revised paper to provide actionable advice to practitioners.\\n\\n### **Impact of Overestimating or Underestimating $f$.**\\nIn our framework, $f$ represents an upper bound on the number of Byzantine workers that may exist in the system. Importantly, our theoretical guarantees remain valid as long as the actual number of Byzantine workers does not exceed $f$. If the true number of Byzantine workers is greater than $f$, the guarantees may no longer hold. This distinction will also be explicitly clarified in the revised version of the paper.\\n\\nThe impact of misestimating $f$ on the performance of ARC is discussed below. Note that similar observations also apply to other robust aggregation schemes that use $f$ in their design, such as NNM, coordinate-wise trimmed mean (CWTM), Multi-Krum (see the aforementioned papers).\\n\\n- **Overestimating $f$:** If $f$ is set higher than the actual number of Byzantine workers, ARC may unnecessary clip honest gradients, which could result in slower convergence.\\n- **Underestimating $f$:** If $f$ is underestimated (i.e., the actual number of Byzantine workers exceeds $f$), robustness guarantees may no longer hold, and the system could fail to prevent the influence of adversarial gradients. This is the more critical scenario, which underscores the importance of setting $f$ conservatively.\\n\\nWe will expand the discussion in the paper to address these points, highlighting both the practical considerations for estimating $f$ and the potential impacts of misestimation. \\n\\nOnce again, we thank the reviewer for raising these important points, which will help us further improve the clarity and practical value of the paper. We hope that this response addresses your concerns, and we welcome any additional feedback you may have.\"}", "{\"comment\": \"We thank Reviewer n1Gk for their prompt and constructive feedback.\\n\\n**Comparison with Static Clipping**\\nWe completely agree with the reviewer on the critical importance of comparing ARC with static clipping. To address this concern, we will summarize the extensive comparison to static clipping in Appendix F and move the key results to the main paper. This adjustment will ensure that readers can easily see that ARC not only outperforms unclipped strategies but also demonstrates clear advantages over static clipping methods. We believe this change will emphasize the core contributions of the paper.\\n\\n**Large-Scale Computing**\\nWe would like to point out to the reviewer that it is not ARC that constitutes the bottleneck in our CIFAR-10 experiments, but rather NNM. ARC\\u2019s computational complexity is $\\\\mathcal{O}(nd + n\\\\log(n))$, while NNM\\u2019s complexity is \\n$\\\\mathcal{O}(dn^2)$, making the latter significantly more resource-intensive, particularly as the number of workers scales up. To address this, we have included detailed runtime experiments in Appendix G.2, which explicitly illustrate the substantial overhead introduced by NNM compared to the comparatively much smaller computational cost associated with ARC. We encourage the reviewer to review this section for further clarification.\\n\\nAs suggested by the reviewer, we will include a discussion in the main paper on the runtime of ARC compared to NNM. 
Additionally, we will describe the computational resources and machinery used for our experiments for full transparency.\\n\\nLastly, we would like to share that we have initiated experiments on CIFAR-10 with 35 workers, effectively doubling the initial system size. While we aim to complete these experiments before the discussion period ends, we cannot guarantee their completion within this timeframe. Regardless, we plan to include these results in the final version of the paper to further strengthen our contributions.\"}", "{\"comment\": \"We sincerely thank Reviewer n1Gk for their time and dedication to improving the quality of the paper.\\n\\nAs the reviewer suggested, we were able to scale the experiments on CIFAR-10 and have included them in the revised version of the paper. We also intend to move these results, along with the comparison with static clipping, to the main body of the final version of the paper to ensure these key findings are highlighted appropriately.\\n\\nThank you once again for your valuable feedback, which has greatly contributed to strengthening the paper.\"}", "{\"comment\": \"Dear Reviewer KAQT,\\n\\nWe sincerely appreciate your detailed feedback and the points you raised regarding Theorem 5.2.\\n\\nIn our rebuttal, we have provided a detailed response addressing your concerns about the meaningfulness of Theorem 5.2. Specifically, we clarified how the parameter $\\\\xi$ plays a critical role in ensuring it, especially in adversarial regimes where $\\\\frac{f}{n}$ approaches the breakdown point. This explanation highlights how Theorem 5.2 demonstrates ARC\\u2019s strict improvement in such scenarios.\\n\\nWe would be grateful if you could review our response and engage in a discussion with us to address any remaining doubts or concerns. We are committed to ensuring that all aspects of the paper are clear and rigorous, and we welcome the opportunity to provide further clarifications if needed.\\n\\nThank you for your constructive review, and we look forward to hearing your thoughts during the discussion period.\"}", "{\"comment\": \"Thank you for your rebuttal. I will raise my score to 6.\"}", "{\"comment\": \"We thank Reviewer Ap5T for the comments, which we discuss below.\\n\\n### Small Static Clipping Threshold\\nWe appreciate the reviewer\\u2019s recommendation to examine the impact of larger static clipping thresholds. In fact, we already tested a broad range of clipping values in our experiments to provide a robust comparison between ARC and static clipping methods. Appendix F presents the results of this comparison, which spans several static clipping parameters (from $0.2$ to $20$) across different levels of heterogeneity, numbers of Byzantine workers, and Byzantine attacks. As our results indicate, no static clipping threshold provides consistent robustness across various heterogeneity levels and Byzantine attack scenarios. By contrast, ARC\\u2019s adaptive mechanism enables it to consistently adjust to these changing conditions, delivering stable performance. For further details, please refer to our response to Reviewer n1Gk.\\n\\nAdditionally, we highlight ARC\\u2019s adaptivity in Figure 19, where we plot the evolving adaptive clipping parameter over time under the same conditions as those in our static clipping experiments. 
This demonstrates ARC\\u2019s ability to dynamically adjust its threshold in response to ongoing training conditions.\\n\\n### Influence of Model Initialization\\nWe believe the reviewer may have overlooked our analysis of model initialization on ARC's performance, which is presented in Figure 2b of the Introduction and Figure 6 in Section 5.2. In these experiments, we assess the impact of progressively worse initialization conditions on ARC-enhanced Robust-DSGD for MNIST with 10 honest workers, under extreme heterogeneity and at $\\\\alpha = 0.1$ with $f = 1$ adversarial worker. Beginning with well-chosen initial parameters, we scale them by a factor $\\\\mu$\\nwhere increasing $\\\\mu$ corresponds to worse initialization ($\\\\mu \\\\in \\\\{1, ..., 5\\\\}$).\\nOur findings indicate that ARC significantly improves performance under good initialization ($\\\\mu=1$), resulting in substantial gains in accuracy. As initialization degrades, ARC\\u2019s advantage over vanilla Robust-DSGD gradually diminishes. However, even under poor initialization ($\\\\mu=5$), ARC still matches the performance of unmodified Robust-DSGD. Importantly, ARC\\u2019s behavior aligns closely with Byzantine-free DSGD (which also degrades in performance under unfavorable initialization conditions), while plain Robust-DSGD struggles to leverage well-initialized models effectively, maintaining lower accuracy around 20\\\\% in Figure 2b.\\nThis analysis underscores the positive influence of good initialization on ARC\\u2019s effectiveness and empirically supports our theoretical insights.\\n\\n### Intuition for the Design of ARC\\nWe thank the reviewer for prompting a discussion on ARC\\u2019s design and clipping mechanism.\\nLemma B.1 in Appendix B shows that $\\\\mathbf{F} \\\\circ \\\\mathbf{Clip}_C$ is $(f, \\\\tilde \\\\kappa)$-robust, provided that $\\\\lvert S \\\\setminus S_c\\\\rvert \\\\geq 1$ for all subsets $S$ of size $n - f$.\\nNote that this condition is impossible to guarantee when using a fixed clipping threshold that does not depend on the given set of input vectors. In order to ensure that $\\\\lvert S\\\\setminus S_c\\\\lvert \\\\geq 1$ for all subsets $S$ of size $n-f$, the clipping threshold $C$ should be large enough such that less than $n - f$ input vectors are clipped. \\nAccordingly, we propose to choose a clipping threshold such that the total number of clipped vectors is of the form $\\\\lfloor \\\\lambda (n-f) \\\\rfloor$, where $\\\\lambda < 1$. Note that it is natural to clip more vectors when the fraction of adversarial workers $\\\\frac{f}{n}$ is large, to control the impact of Byzantine behavior.\\nTherefore, we set $\\\\lambda \\\\coloneqq \\\\zeta \\\\frac{f}{n}$ where $0 \\\\leq \\\\zeta \\\\leq 2$. Since $\\\\frac{f}{n}<\\\\frac{1}{2}$ and $\\\\lambda < 1$, the number of clipped vectors $\\\\lfloor \\\\zeta \\\\frac{f}{n} (n - f)\\\\rfloor < n-f$ for all $\\\\zeta \\\\in [0, 2]$. \\nThis constitutes the underlying idea behind our adaptive clipping strategy ARC.\\nFurthermore, note that our theory holds for all $\\\\zeta \\\\in [0, 2]$, but we observed empirically that $\\\\zeta = 2$ provides ARC with the best performance.\\nWe thank the reviewer for this question. We will indeed include this discussion in the paper.\"}", "{\"comment\": \"I appreciate the authors' clarification on my earlier questions. 
I would like to seek further clarification on the following:\\n\\nThe computation of the adaptive threshold relies on knowing the current number of Byzantine clients in the system, $f$. However, estimating this parameter can be challenging due to the unpredictable nature of Byzantine faults. Could you provide practical guidelines for estimating $f$ when determining adaptive thresholds for clipping? Additionally, what are the potential impacts of overestimating or underestimating the number of attackers on the robustness and performance of ARC?\"}", "{\"comment\": \"We thank Reviewer n1Gk for the comments, which we discuss below.\\n\\n### Comparison With Static Clipping\\nWe appreciate the reviewer\\u2019s insight into the importance of a direct comparison between ARC and static clipping methods. However, we believe there may be a minor misunderstanding, as we have indeed included an extensive comparison with static clipping in Appendix F (referenced in line 256, Section 3). In this section, we perform experiments using Robust-DSGD on MNIST, evaluating the performance of static clipping across different settings and comparing it with ARC. Specifically, we tested a range of clipping values $C \\\\in $ {0.02, 0.2, 2, 20}, covering various heterogeneity levels.\\n\\nOur findings indicate that no single static clipping value consistently yields robust performance across diverse scenarios, largely due to the inherent sensitivity of static clipping to factors like data heterogeneity and the nature of Byzantine attacks. Since the server cannot directly access data or a priori determine the attack executed by Byzantine workers, tuning $C$ optimally for static clipping becomes infeasible. This highlights the necessity of a robust, adaptive clipping mechanism like ARC that naturally adjusts to these conditions. In contrast, ARC consistently adapts to different heterogeneity levels and adversarial settings, providing robust performance without the need for parameter tuning.\\n\\nIn addition to the results presented on MNIST, we have also conducted further empirical tests on Fashion-MNIST and CIFAR-10, which support the same observations. Although these results were omitted from the paper, we would be happy to provide them in Appendix G of the revised paper, demonstrating that ARC consistently outperforms static clipping across datasets and conditions. Additionally, for the final version of the paper, we intend to move some key results from Appendix F to the main body to also showcase ARC\\u2019s advantages over static clipping.\\n\\n### Large-scale experiments\\nWe acknowledge the reviewer\\u2019s point regarding network size and appreciate the suggestion to evaluate ARC\\u2019s performance with a greater number of agents. While we used 10 honest workers on MNIST to simulate extreme heterogeneity by assigning each worker a single label, we agree that expanding our study to larger networks would enhance the practical relevance of our results. Preliminary experiments with 30 honest workers on MNIST (tripling the initial system size) affirm ARC\\u2019s consistent empirical advantages even in larger setups. We are currently running additional experiments in this setting, and will make these results available shortly in Appendix G of the revised paper.\\nRegarding CIFAR-10, we chose a network of 17 workers to create a system larger than what we had for MNIST, as suggested. 
However, we are constrained by the significant computational resources required for these experiments, which limits our ability to expand further.\\nWe hope the reviewer can appreciate this trade-off, as CIFAR-10\\u2019s computational demands are high, and we aimed to balance system size with experimental feasibility.\"}", "{\"comment\": \"We would like to draw Reviewer Ap5T\\u2019s attention to Appendix G.2, where, as requested, we have included a detailed runtime comparison of the computational performance of NNM and ARC across varying numbers of workers and model sizes. These experiments clearly demonstrate the significantly lower computational cost of ARC compared to NNM, particularly as the number of workers increases.\\n\\nWe encourage the reviewer to review this new section, and thank them for their valuable feedback.\"}", "{\"summary\": \"This paper proposes a novel strategy called Adaptive Robust Clipping (ARC) for Byzantine-resilient distributed machine learning. Empirical results show that using ARC can significantly enhance Byzantine resilience compared to the methods without clipping. Theoretical analysis of convergence is also provided to show the effect of ARC.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. This paper is generally well-written.\\n2. The idea of adaptive clipping intuitively makes sense and has an excellent empirical performance in the experiments of this work.\\n3. Byzantine resilience in distributed learning is an important and timely topic.\", \"weaknesses\": \"Although the proposed ARC strategy is generally not hard to implement and has a good empirical performance, there are major concerns about the theoretical analysis in this paper, which I specify point by point below.\\n\\n1. The theoretical results in section 3 show that $F\\\\circ ARC$ is $(f,3\\\\kappa)$-robust when $F$ is $(f,\\\\kappa)$-robust (Theorem 3.2). Although the property of $ARC$ is much better than trivial clipping (as shown in Lemma 3.1), the convergence guarantee obtained from Theorem 3.2 for $F\\\\circ ARC$ is worse than that for $F$ (without $ARC$). In other words, the theoretical results in section 3 show that $ARC$ has a better property than trivial clipping, but do not show that using $ARC$ can improve the convergence guarantees.\\n\\n2. The improvement of convergence guarantees for $ARC$ is mainly shown by Theorem 5.2. Theorem 5.2 says that when the maximum gradient norm of the initial point is not larger than $\\\\zeta$, using $ARC\\\\circ F$ can guarantee to find a point $\\\\hat{\\\\theta}$ such that the square norm of the gradient at $\\\\hat{\\\\theta}$ is not larger than $v \\\\epsilon_0$ in expectation. However, $v \\\\epsilon_0$ can be much larger than $\\\\zeta^2$ (which is specified in the next paragraph). Briefly speaking, the result of Theorem 5.2 can be even weaker than the conditions, which makes the theorem meaningless. \\n- Since $\\\\xi \\\\leq \\\\min(\\\\frac{v}{\\\\Phi(G,B,\\\\rho)},\\\\xi_0)$, it is obtained that $\\\\xi \\\\leq \\\\frac{v}{\\\\Phi(G,B,\\\\rho)}$, and thus $v\\\\geq \\\\xi \\\\cdot \\\\Phi(G,B,\\\\rho)=\\\\xi\\\\cdot 640(1+\\\\frac{1}{B^2})^2(1+\\\\frac{B^2\\\\rho^2}{G^2}).$ Therefore, \\n$v\\\\epsilon_0 \\\\geq [\\\\xi\\\\cdot 640(1+\\\\frac{1}{B^2})^2(1+\\\\frac{B^2\\\\rho^2}{G^2})]\\\\cdot[\\\\frac{1}{4}\\\\cdot\\\\frac{G^2(f/n)}{1-(2+B^2)(f/n)}].$ The term $\\\\rho^2=\\\\exp (2\\\\frac{(2+B^2)\\\\Delta_0}{(1-\\\\xi_0)G^2}L)\\\\zeta^2$ can be much larger than $\\\\zeta^2$. 
Thus, $v\\\\epsilon_0$ can be much larger than $\\\\zeta^2$, which will make Theorem 5.2 meaningless.\\n\\nOverall, the idea of ARC is interesting. The ARC method is easy to implement and has a good empirical performance. However, the theoretical analysis in the current version does not show the improvement of convergence guarantees, and can be misleading.\", \"questions\": \"Please focus on the weakness of Theorem 5.2 in the rebuttal. Specifically, please compare the value of $v\\\\epsilon_0$ with $\\\\zeta^2$ in Theorem 5.2 (or address the concern in some different ways). I am willing to raise the score if the concerns are properly addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We would like to inform Reviewer n1Gk that the requested larger-scale experiments on CIFAR-10 are completed. In these experiments, we considered a larger system consisting of $n\\u2212f=33$ honest workers with\\n$f=2$ and $f=3$ Byzantine workers.\\n\\nThe results of these additional experiments have been included in Appendix G.3.2 of the revised paper. Notably, these experiments confirm and reinforce the observations presented in the main paper, demonstrating ARC\\u2019s ability to enhance performance even in larger systems. Given their significance, we plan to incorporate these results into the main paper to further highlight ARC\\u2019s scalability and effectiveness.\\n\\nWe sincerely thank the reviewer for their constructive feedback and the insightful changes they proposed. We hope we have adequately addressed all concerns, and we respectfully anticipate that these revisions will reflect positively in the reviewer\\u2019s final assessment.\"}", "{\"summary\": \"The paper introduces an adaptive gradient clipping method applied to worker outputs before passing them through a robust aggregator in heterogeneous synchronous Byzantine settings. The authors address a practical issue, as they observe that while fixed gradient clipping can enhance performance in some cases, it may also impair it in others. To ensure robustness while utilizing gradient clipping, they propose an adaptive strategy that adjusts the clipping threshold according to the magnitude of worker outputs, applying clipping selectively to only a subset of them. Experimental results across various Byzantine scenarios and robust aggregators, tested on MNIST and CIFAR-10 datasets, demonstrate the effectiveness of this adaptive approach when combined with the established NNM method. The authors further support their method with theoretical guarantees.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The paper proposes an adaptive method that maintains the robustness guarantees of the aggregators it employs while improving their practical performance, especially under high heterogeneity. The authors provide valuable insights into selecting the clipping threshold, demonstrating that a fixed threshold for all workers, commonly used in practice, may be inefficient in some cases and does not meet robust criteria. 
They also emphasize the gap between Byzantine theory and practical applications, highlighting that existing theory may not fully capture real-world performance.\", \"weaknesses\": [\"Considering the critical role that numerical evaluation plays in supporting the paper\\u2019s claims,\", \"The paper introduces an adaptive clipping approach designed to work with any robust aggregator independently of NNM. However, the numerical results primarily showcase its effectiveness only when combined with the NNM aggregator (and it is unclear if NNM was also used in Figure 6; if so, this single example may be insufficient). Since NNM has a computational complexity of $O(dn^2)$, it would be valuable to assess the performance of this approach with other robust aggregators (without integrating NNM) to explore potentially lower computational costs. For instance, the CWTM ($O(dn \\\\log{n})$) or the $\\\\epsilon$-approximation GM ($O(dn + d\\\\epsilon^{-2})$) might offer alternatives that may retain robustness in practice while reducing time complexity. Conducting such experiments could provide a more comprehensive evaluation and emphasize the approach\\u2019s practicality.\", \"The CIFAR-10 evaluation is somewhat limited, with only one Byzantine worker out of 17. Expanding the evaluation to include a higher proportion of Byzantine workers and testing on more complex datasets could better demonstrate the method\\u2019s effectiveness in more practical scenarios.\"], \"questions\": [\"How does NNM contribute to achieving the guarantees outlined in Theorem 5.2? Is it possible to attain similar results on robust aggregators without incorporating NNM?\", \"Using a fixed clipping threshold can often be effective in homogeneous Byzantine settings. How does the adaptive approach perform compared to a fixed threshold in such cases?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' follow-up explanations. My concerns are almost properly addressed. As I promised, I will increase my rating to 8.\\n\\nMeanwhile, I still politely disagree with the authors on the statement that 'BP can be arbitrarily small'. I understand that for any $\\\\epsilon>0$, there exists a case such that $BP-f/n<\\\\epsilon$. However, the total worker number $n$ and the breakdown point BP are determined by the problem and should not be considered as variables. Thus, the statement could be misleading. I hope that the authors could re-consider this statement in future versions.\"}", "{\"summary\": \"This paper explores enhancing the robustness of distributed machine learning in the presence of Byzantine clients. The authors propose Adaptive Robust Clipping (ARC) that improves the robustness of Robust-DGD beyond traditional static clipping techniques. 
ARC dynamically adjusts clipping thresholds based on gradient magnitudes, allowing it to better counteract adversarial impacts.\\nThe authors provide experiments to demonstrate that ARC improves model robustness against various attacks compared to static clipping methods as well as theory showing that ARC has a similar convergence rate as the classical ones known in the literature.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The main strengths are:\\n\\n-The authors propose Adaptive Robust Clipping (ARC), a new mechanism to enhance robustness in adversarial settings.\\n\\n-The authors show that ARC almost retains the theoretical robustness guarantees of existing Robust methods while enhancing their practical performance. \\n\\n-The authors validate ARC through several experiments.\", \"weaknesses\": \"The main weaknesses are:\\n\\n-Increased complexity produced by ARC in practical implementation\\n\\n-ARC performance depends on good model initialization which may degrade the performance in the case of poor initialization.\\nDid you try some experiments to assess this?\\n\\n-While ARC improves robustness by adaptively clipping gradients, its thresholding could risk clipping too aggressively in certain settings, potentially discarding useful gradient information.\", \"questions\": \"See section before.\", \"in_addition\": \"-In Figure 1, C=2 for static clipping (SC) is too small. I think that is why you have a very bad performance of SC. You need to test with bigger values of C as well for SC.\\n\\n-Can you report the Adaptive C of ARC over steps in these plots?\\n\\n- Line 94, can you justify why one needs to clip this exact k number gradients? what happens if one clip less or more than this number of gradients?\\n\\n-The intuitive k is the number of potential malicious workers which is f?\\n\\n-Line 238, you require \\\\kappa B^2 < 1, (which means B should be small) can you give an example of loss where B is small?\\n\\n-Line 264, I disagree with the comment that \\\"ARC does not introduce a significant overhead\\\" especially in the case of large models. It will be good to have some experiments with runtime on the x-axis \\n\\n-ARC theory requires that the model is well-initialized, it will be good to assess numerically the impact of the initialization on the performance of ARC\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Dear Reviewers,\\n\\nWe are pleased to inform you that the additional experiments requested have now been completed. These experiments, which address the specific points raised in the reviews, have been incorporated into the revised version of the paper. For ease of reference, we have included these results in Appendix G, and highlighted all additions and revisions in blue.\\n\\nThe new results further reinforce the claims made in the paper and provide additional clarity on the robustness and adaptability of ARC in various settings. We encourage you to review these updates and welcome any further comments or suggestions you may have.\\n\\nThank you once again for your time and consideration.\"}", "{\"comment\": \"We sincerely thank the reviewer for their prompt response and willingness to engage in improving the paper. 
While we acknowledge the reviewer's point that the improvement guaranteed by Theorem 5.2 is more prominent when $n$ is large, the theorem is still applicable when $n$ is small as we explain below.\\n\\n### **BP$-f/n$ can be smaller than $1/n$.** \\nWe respectfully disagree with the reviewer's assertion that the difference between $f/n$ and BP must be at least $1/n$. This difference can, in fact, be significantly smaller than $1/n$, depending on the system configuration and heterogeneity parameter $B$. The BP (Breakdown Point) is defined as the maximum fraction of Byzantine workers that the system can tolerate while maintaining robustness. This value is a constant, independent of $f$ and $n$, and depends on the heterogeneity parameter $B$, as noted in line 419 of the paper. Specifically, BP is given by $\\\\frac{1}{2 + B^2}$. Consequently, there exist configurations where $f/n$ can be extremely close to BP. For example, consider the case where $B^2 = 1$, giving BP $= 1/3$. Now imagine a system with a small number of workers, $n = 10$, among which $f = 3$ are Byzantine. In this case, $f/n = 0.3$, and $\\\\text{BP} - f/n = 1/30 < 1/n$. The difference between $f/n$ and BP can thus be arbitrarily small. Therefore, the inequality on $\\\\upsilon$ provided by the reviewer, which assumes a minimum difference of $1/n$, does not universally hold.\\n\\n### **Practical scope of Theorem 5.2.**\\nThat said, we appreciate the reviewer\\u2019s point regarding certain practical limitations of Theorem 5.2. Indeed, there are scenarios where $\\\\text{BP} - f/n$ equals $1/n$ (or more). For instance, consider the same system with $n = 10$ workers but with $f = 2$. In this configuration, $\\\\text{BP} - f/n \\\\geq 0.1 = 1/n$. In this particular case, the inequality provided by the reviewer holds, and Theorem 5.2 does not guarantee an improvement. We acknowledge this practical limitation and will include a discussion in the paper to clearly outline these scenarios. As the reviewer correctly pointed out, when $n$ is large, the range of $f$ for which Theorem 5.2 guarantees an improvement is also large. To **summarize**:\\n- When $n$ is **small** relative to $\\\\psi(G, B, \\\\rho)$, Theorem 5.2 may guarantee an improvement for only one value of $f$ (i.e., the largest value of $f$ for which $f/n$ is smaller than BP.).\\n- When $n$ is **large** relative to $\\\\psi(G, B, \\\\rho)$, as the reviewer pointed out, Theorem 5.2 guarantees an improvement for a wide range of $f$.\\n\\nThe primary purpose of Theorem 5.2 is to demonstrate that the established lower bound in Byzantine ML can be circumvented in strong adversarial regimes where $f/n$ is very close to the BP, provided the models of honest workers are bounded at initialization. Moreover, our extensive empirical results consistently show a strict improvement induced by ARC over Robust-DSGD across systems of varying sizes (from small to large) and for different numbers of Byzantine workers $f$.\\n\\n### **Planned Revisions.** \\nWe will incorporate the reviewer\\u2019s feedback into the final version of the paper by discussing the above practical limitations of Theorem 5.2 and their implications.\\n\\nOnce again, we thank the reviewer for their thoughtful and constructive feedback, and we hope this clarification adequately addresses all remaining concerns.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"metareview\": \"This paper proposes a new method for adaptive gradient clipping in Byzantine-resilient distributed setting. 
It provides both theoretical and empirical support for the method. The discussion during the rebuttal for this paper centered on theoretical rigor, experimental breadth, and practical applicability of Adaptive Robust Clipping (ARC) in Byzantine-resilient distributed learning. Reviewers raised concerns about the limitations of the theoretical guarantees, particularly in highly adversarial regimes, the dependence of ARC on robust initialization, and the lack of direct comparisons to static clipping in the main text. Authors addressed these points by providing detailed clarifications on theoretical underpinnings, incorporating additional experiments comparing ARC to static clipping, and demonstrating ARC's scalability through new larger-scale experiments. Finally, all reviewers were very positive and hence I recommend acceptance.\", \"additional_comments_on_reviewer_discussion\": \"There was discussion about the strength of the paper's theoretical results, the performance with more Byzantine workers, etc. The authors have addressed these issues.\"}", "{\"comment\": \"We sincerely thank all the reviewers for their constructive comments and insightful suggestions. Your feedback has been invaluable in improving the quality and clarity of the paper.\"}", "{\"comment\": \"We thank Reviewer zvxn for the comments, which we discuss below.\\n\\n### On Combining ARC with NNM\\nWe appreciate the reviewer\\u2019s suggestion to explore ARC\\u2019s performance independently of NNM. In the main paper, our experiments primarily highlight ARC\\u2019s improvement when paired with NNM, the state-of-the-art mixing algorithm for addressing heterogeneity in Byzantine ML. This choice is driven by prior findings showing that NNM significantly enhances the robustness of existing aggregations, especially under conditions of high data heterogeneity or a large proportion of Byzantine workers.\\nWithout NNM, robust aggregations such as CWTM or GM alone yield poor empirical results in high heterogeneity regimes, as demonstrated in prior work on NNM (see *Fixing by mixing: A recipe for optimal byzantine ml under heterogeneity* (Allouah et al.)).\\nFurthermore, in homogeneous settings where NNM is not required, ARC results in comparable or better empirical performance compared to vanilla robust aggregations without ARC.\\nAdditionally, we confirm that NNM was indeed used in Figure 6, and we will clarify this in the final version.\\n\\n### On CIFAR-10 and More Complex Datasets\\nIn our CIFAR-10 evaluation, we used only one Byzantine worker among $\\nn=17$, aiming to demonstrate that even a single adversarial worker ($f=1$, or under 6\\\\% of the system) can significantly degrade the learning when ARC is not used in Robust-DSGD. We agree that assessing ARC under a higher proportion of Byzantine workers would further underscore its practical value. We have already begun running additional experiments with more Byzantine workers, which so far confirm ARC\\u2019s robustness benefits under increased $f$ values. We will include these results in the revised version of the paper in Appendix G when they are finalized.\\n\\nRegarding dataset complexity, our current evaluation leverages standard benchmarks in Byzantine ML: MNIST, Fashion-MNIST, and CIFAR-10. 
Although these datasets are less challenging in conventional ML, they introduce considerable difficulty under Byzantine settings, as confirmed by prior research (see *Fixing by mixing: A recipe for optimal byzantine ml under heterogeneity* (Allouah et al.), *Byzantine machine learning made easy by resilient averaging of momentums* (Farhadkhani et al.),*Byzantine-Robust Learning on Heterogeneous Datasets via Bucketing* (Karimireddy et al.)). Extending Byzantine ML evaluations to more complex datasets is a promising future direction, and we agree that it could further substantiate ARC\\u2019s applicability in practical, large-scale scenarios.\\n\\n### Performance of ARC in Homogeneous Settings Compared to Static Clipping\\nThank you for raising the question about ARC\\u2019s performance under homogeneous conditions compared to static clipping. As detailed in Appendix F, we compared ARC with various static clipping strategies across three heterogeneity levels. Notably, in moderate heterogeneity in Figure 17 (corresponding to sampling from a Dirichlet distribution with parameter $\\\\alpha = 1$) \\u2014 the closest setting to homogeneity considered \\u2014 ARC performs on par with the best static clipping strategies (e.g., $C = 2$ and $C = 20$) and outperforms others (e.g., $C = 0.2$ and $C = 0.02$). Additional experiments we conducted in completely homogeneous settings further support these observations, showing that ARC matches or exceeds the performance of the best static clipping strategies available. Thus, ARC remains effective even in low-heterogeneity environments, adapting well to homogeneous scenarios without compromising robustness.\\n\\n### On NNM's Importance for the Guarantees of Theorem 5.2\\nWe thank the reviewer for this insightful question.\\nNNM ensures that the robustness coefficient $\\\\kappa \\\\in \\\\mathcal{O}(f/n)$ (more specifically, see Lemma 2.5), making it possible to induce an improvement when $f/n$ approaches the breakdown point. This improvement could not be demonstrated otherwise, i.e., if $\\\\kappa$ is not proportional to $f/n$.\"}", "{\"comment\": \"We thank the reviewer for the for detailed feedback and constructive comments that will help improve the paper.\\n\\nWe acknowledge the reviewer's point regarding BP - $f/n$ being arbitrarily small.\\nWe agree that $n$ and BP are problem-specific constants rather than variables.\\nWe will clarify this point in the revised version of the paper, and ensure that the limitation on the applicability of Theorem 5.2 when $n$ is small is properly addressed.\"}", "{\"comment\": \"We thank Reviewer KAQT for the comments, which we discuss below.\\nWe would like to clarify the purpose of this theorem and address the specific points raised.\\n\\n### Purpose of Theorem 5.2\\nTheorem 5.2 aims to demonstrate the improvement induced by ARC over existing methods in strong adversarial regimes, specifically when the corruption fraction $\\\\frac{f}{n}$ approaches the breakdown point (BP) of the system. 
In this extreme setting, ARC ensures that the error is strictly better than the lower bound, provided the norms of honest gradients are bounded at model initialization.\\n\\n### Addressing the Reviewer\\u2019s Concerns\\nThe reviewer\\u2019s calculations are correct, but they do not fully consider the role of $\\\\xi$, which measures how close the corruption fraction $\\\\frac{f}{n}$ is to the BP.\\nIn Theorem 5.2, $\\\\xi$ explicitly influences the final error bound $\\\\upsilon\\\\varepsilon_o$.\\nTherefore, the lower bound of $\\\\upsilon\\\\varepsilon_o$ computed by the reviewer can be made arbitrarily small since it is proportional to $\\\\xi$, by considering $f/n$ to be arbitrarily close to the BP.\\nWhile $\\\\rho^2$ can indeed be much larger than $\\\\zeta^2$, this does not render the theorem meaningless. The large value of $\\\\rho^2$ reflects the difficulty of maintaining robustness in extreme adversarial regimes. However, the proportionality of $\\\\upsilon\\\\varepsilon_o$ to $\\\\xi$ ensures that the bound remains relevant. Specifically, as $\\\\xi$ becomes small, the influence of $\\\\rho^2$ diminishes relative to the overall bound.\\nWe refer the reviewer to Appendix C, specifically Theorem C.4 and Corollary C.5, for additional details on the convergence of Robust DGD with ARC. We hope that these additional results will make things clearer.\\n\\n### Proposed Revisions for Clarity\\nTo address potential confusion and enhance the clarity of Theorem 5.2, we will include specific values for $v$ and $\\\\xi_0$ in the final version of the paper.\\nIndeed, we will consider a special case of the theorem by choosing $\\\\xi_o = 0.5$. In this case, $\\\\rho \\\\coloneqq \\\\exp\\\\left( \\\\frac{2 (2 + B^2)\\\\Delta_o L}{G^2} \\\\right) \\\\zeta$.\\nFurthermore, let $\\\\boldsymbol{\\\\upsilon} \\\\leq \\\\xi_o = 0.5$. Since, in this particular case, $\\\\frac{\\\\boldsymbol{\\\\upsilon}}{\\\\Psi(G, B, \\\\rho)} \\\\leq \\\\xi_o$ (because $\\\\Psi(G, B, \\\\rho) \\\\geq 1$), Theorem 5.2 implies that **if** $0 < \\\\xi \\\\leq \\\\frac{1}{2}\\\\frac{\\\\boldsymbol{1}}{\\\\Psi(G, B, \\\\rho)}$ **then** $\\\\mathbb{E} {\\\\lVert \\\\nabla \\\\mathcal{L}_{\\\\mathcal{H}} \\\\hat{\\\\left ( \\\\theta \\\\right)} \\\\rVert}^2 \\\\leq \\\\boldsymbol{\\\\frac{1}{2}} ~ \\\\varepsilon_o < \\\\varepsilon_o$.\\nThis special case focuses on the regime of small $\\\\xi$, explicitly illustrating how $\\\\upsilon\\\\varepsilon_0$ (where $\\\\upsilon = \\\\frac{1}{2}$) improves over existing methods under these conditions. Additionally, we will emphasize that the strict improvement shown in Theorem 5.2 is particularly significant when $\\\\frac{f}{n}$ is close to the BP, aligning with the theorem's primary objective.\\n\\n### Meaningfulness of Theorem 5.2.\\nWe believe Theorem 5.2 presents an important result in Byzantine-robust distributed learning. The theorem shows that if the fraction of adversarial workers $f/n$ is close to the breakdown point then the lower bound on the learning error under $(G, B)$-gradient dissimilarity can be circumvented, provided that the honest workers' gradients at model initialization are bounded. 
This result opens up a new line of research of considering pragmatic distributed learning settings that avoid the worst-case scenarios in the presence of adversarial workers, thereby improving robustness guarantees.\\n\\n### Robustness Preservation by ARC\\nYes, we agree that Theorem 3.2, which shows that ARC preserves $(f, 3 \\\\kappa)$-robustness, is *not sufficient* for proving an improvement. Indeed, we show that ARC has an additional property shown in Lemma 5.1 (in Section 5), which in conjunction with Theorem 3.2, yields an improvement in the learning error characterized in Theorem 5.2.\\n\\nWe hope that the above properly addresses the reviewer's concerns. In which case, we expect the reviewer to raise the score of our paper.\"}" ] }
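The discussion above repeatedly refers to ARC's adaptive rule: choose the clipping threshold from the input gradients themselves so that exactly $\lfloor \zeta \frac{f}{n}(n-f) \rfloor$ of the largest-norm vectors are clipped (with $\zeta = 2$ reported as the empirically best choice), at $\mathcal{O}(nd + n\log(n))$ cost. The snippet below is a minimal sketch of that rule for illustration only, not the authors' reference implementation; in particular, taking the $(k+1)$-th largest norm as the threshold and the name `arc_clip` are assumptions made here for concreteness.

```python
import numpy as np


def arc_clip(gradients: np.ndarray, f: int, zeta: float = 2.0) -> np.ndarray:
    """Illustrative sketch of Adaptive Robust Clipping (ARC).

    `gradients` is an (n, d) array of worker gradients and `f` the number of
    Byzantine workers tolerated. Exactly k = floor(zeta * (f / n) * (n - f))
    of the largest-norm gradients are clipped; the threshold is assumed here
    to be the (k+1)-th largest norm, which leaves at least one honest
    gradient unclipped whenever k < n - f.
    """
    n = gradients.shape[0]
    norms = np.linalg.norm(gradients, axis=1)
    k = int(np.floor(zeta * (f / n) * (n - f)))
    if k == 0:
        return gradients.copy()
    order = np.argsort(norms)[::-1]      # indices sorted by descending norm
    threshold = norms[order[k]]          # norm of the (k+1)-th largest gradient
    scale = np.minimum(1.0, threshold / np.maximum(norms, 1e-12))
    return gradients * scale[:, None]    # O(nd) scaling after an O(n log n) sort


# Example mirroring the setting mentioned in the thread: n = 10 workers, f = 3,
# so k = floor(2 * 0.3 * 7) = 4 gradients get clipped.
rng = np.random.default_rng(0)
clipped = arc_clip(rng.normal(size=(10, 5)), f=3)
```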
03EkqSCKuO
Port-Hamiltonian Architectural Bias for Long-Range Propagation in Deep Graph Networks
[ "Simon Heilig", "Alessio Gravina", "Alessandro Trenta", "Claudio Gallicchio", "Davide Bacciu" ]
The dynamics of information diffusion within graphs is a critical open issue that heavily influences graph representation learning, especially when considering long-range propagation. This calls for principled approaches that control and regulate the degree of propagation and dissipation of information throughout the neural flow. Motivated by this, we introduce port-Hamiltonian Deep Graph Networks, a novel framework that models neural information flow in graphs by building on the laws of conservation of Hamiltonian dynamical systems. We reconcile under a single theoretical and practical framework both non-dissipative long-range propagation and non-conservative behaviors, introducing tools from mechanical systems to gauge the equilibrium between the two components. Our approach can be applied to general message-passing architectures, and it provides theoretical guarantees on information conservation in time. Empirical results prove the effectiveness of our port-Hamiltonian scheme in pushing simple graph convolutional architectures to state-of-the-art performance in long-range benchmarks.
[ "graph representation learning", "long-range propagation", "ordinary differential equations" ]
Accept (Poster)
https://openreview.net/pdf?id=03EkqSCKuO
https://openreview.net/forum?id=03EkqSCKuO
ICLR.cc/2025/Conference
2025
{ "note_id": [ "znEvBo7vK5", "wK8Eq7MpOa", "u6vCziQS3p", "nvIbV12O4i", "mBESaHnIj8", "m22GJ5oQ7G", "kZk3TVKsno", "iqHdhXFvpw", "fzqL2Z3Olu", "fX2wgVmI6Y", "dvcmzyF55B", "d6JJf0KjwN", "VkDpeGIb3U", "T7478EvGp9", "SYAaRs1WdH", "S2CUtq5iKs", "R125jFd7tn", "O9fpxBdnrZ", "Lw6pbguCws", "LM5DS4s349", "KCRLEboAzz", "J7rugzU8yy", "H9PCwNoHeA", "Fe0GVofRt9", "9lI1rHmIvj", "9JFBNqaAUN", "7bnoXBWIn9", "5uvZaoIX0C", "5Le4FyHyzD" ], "note_type": [ "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732485545819, 1732021756221, 1737523859626, 1730645938384, 1732019934246, 1732441168385, 1732441213032, 1732019606238, 1733149289647, 1732021440627, 1732615983191, 1730706862818, 1732021226286, 1732441228521, 1732548526872, 1732019839720, 1732673929850, 1732285259487, 1733157356146, 1732021563638, 1732679901180, 1733150503659, 1734725502284, 1729822898690, 1732549003684, 1732021824868, 1732713919006, 1732578944721, 1732020204116 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_X6rz" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_X6rz" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_7PXw" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ], [ "ICLR.cc/2025/Conference/Submission7738/Area_Chair_ei3M" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_7PXw" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ], [ "ICLR.cc/2025/Conference/Submission7738/Reviewer_XxSd" ], [ "ICLR.cc/2025/Conference/Submission7738/Authors" ] ], "structured_content_str": [ "{\"title\": \"Increasing score\", \"comment\": \"Thank you for your thorough response. I also appreciate the new theorems and the ablation studies in Appendix D, which further demonstrate the strength and robustness of PH-DGN. Although some results are borderline, I do think modeling forward dynamics of GNN as an explicit Hamiltonian dynamics plus dissipation is an interesting idea. 
Therefore, I am increasing my score.\"}", "{\"title\": \"Rebuttal Part 1\", \"comment\": \"We sincerely thank the Reviewer for the consistently positive feedback and the valuable insights provided on our work. In particular, we appreciate the Reviewer\\u2019s recognition of the ***very clear motivation*** behind our approach, as well as its ***originality*** and ***cleverness***. Additionally, we are grateful for the acknowledgment of our ***strong theoretical results***, our ***superb experimental setup***, and the ***consistently favorable results*** demonstrated. We are pleased to address each comment in detail, following the original order.\\n\\n**Regarding theoretical insights in the full port-Hamiltonian case**\\n\\nWe appreciate the Reviewer\\u2019s effort in improving the quality of our work. In this regard, we included the Reviewer\\u2019s suggestion in Appendix B.7, where we derived additional theoretical results on the effects of the port-Hamiltonian components on information propagation. Specifically, we observe that the sensitivity matrix can be linearly decomposed into the conservative and dissipative terms. Under the assumption that the driving forces and their derivatives are bounded, the self-influence of a node after one update can be constrained within fixed upper and lower bounds, which are mediated (among others) by the step-size $\\\\epsilon$. Additionally, we demonstrate that a similar upper bound applies to the influence between neighboring nodes. These results indicate that, under mild assumptions, the port-Hamiltonian components theoretically support long-range propagation.\\n\\n**Regarding the explanation of Theorem 2.3**\\n\\nWe thank the Reviewer for carefully reading our manuscript and helping us to further improve the presentation of our work. We acknowledge that the wording \\u201cretains its complete past\\u201d may be ambiguous and leaves room for interpretation about how \\u201cstrong\\u201d the influence of the past on the final representation is. However, since the sensitivity matrix does not vanish, we believe that the information cannot be discarded. Moreover, as our upper bound on the self-node BSM in Appendix A.1 shows, the further the node representation lies in the past, the higher its influence on the final state to achieve the energy conservation regime. We are happy to discuss this further with the Reviewer, since we do not fully grasp the intended meaning of the \\u201cimply something stronger\\u201d comment.\\n\\n**Regarding the explanations of the performance gains coming from driving forces**\\n\\nWe thank the Reviewer again for pointing us to opportunities to further improve our work. Dampening and external forces have natural interpretations stemming from physical phenomena, like a pendulum that suffers friction losses while being externally moved around in space. In our scenario, we believe that the learned driving forces act as an adaptive filter mechanism that filters noisy information in the embedding evolution, facilitating the learning of relevant information and thus resulting in improved performance on the downstream task. However, visualizing the filtering mechanism poses a significant challenge because we do not have explicit knowledge of what constitutes \\\"relevant\\\" versus \\\"noisy\\\" information. Similarly, visualizing the high-dimensional node dynamic trajectories is also not trivial, since a 2D representation may not reflect the actual non-linear behavior in the latent space.
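\\n\\nFor intuition only, the generic port-Hamiltonian form (not necessarily the exact parameterization used in our model) makes the roles of the two driving forces explicit: writing the state evolution as $\\\\dot{x}(t) = (J - R(x))\\\\nabla H(x(t)) + F(x(t), t)$, with $J$ skew-symmetric (conservative coupling), $R(x) \\\\succeq 0$ (internal dampening), and $F$ an external forcing term, the energy balance reads $\\\\frac{d}{dt} H(x(t)) = -\\\\nabla H(x)^\\\\top R(x) \\\\nabla H(x) + \\\\nabla H(x)^\\\\top F(x, t)$. The skew-symmetric part leaves the energy untouched, the dampening term can only remove energy (i.e., filter out information), and the external force can inject it, which is the adaptive filtering behavior described above.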
\\n\\n**Regarding minor typos**\\n\\nWe thank the Reviewer for thorough reading of our work. We corrected them in the revised paper.\\n\\n**Regarding classical GCN in Theorem 2.4**\\n\\nWe thank the Reviewer for the comment, since our goal is to provide a general framework which can be used with many different aggregation schemes. Considering the difference between our vanilla neighborhood aggregation and the degree-normalized GCN sum, we find that the constants $\\\\frac{1}{\\\\sqrt{d_u}\\\\sqrt{d_v}}$ can be injected as part of a weighted adjacency matrix into our proof. For the ease of presentation, we opt for omitting these kinds of scaling factors in the proof.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"This paper proposes PH-DGN, a new GNN based on the port-Hamiltonian system to develop GNN that can solve graph learning tasks that require long-range dependencies. Two variants of PH-DGN are proposed: a conservative PH-DGN based on the Hamiltonian system and a PH-DGN based on the port-Hamiltonian system by introducing learnable dissipative terms.\\nThe theoretical analyses show that the conservative PH-DGN is stable and energy-preserving as a dynamical system, and derive a lower bound for the sensitivity, implying the possibility of long-range interaction.\\nNumerical experiments show that the conservative PH-DGN is energy-preserving without gradient vanishing using synthesis datasets (Section 3.1) and that the long-range interaction can be achieved in graph learning tasks that require long-range interactions (Sections 3.2, 3.3). Also, the usefulness of two variants of PH-DGN is evaluated on long-range graph benchmarks of real datasets.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The background knowledge of the port-Hamiltonian system is explained carefully, increasing accessibility for readers unfamiliar with this topic.\", \"For conservative PH-DGN, the ability of long-range interaction is theoretically shown by giving the lower bounds of sensitivity.\", \"The validity of the proposed methods for synthesis datasets is properly demonstrated.\"], \"weaknesses\": [\"As the title indicates, the theme of this paper is the relationship between GNN and long-range propagation based on the port-Hamiltonian system. However, no theoretical guarantees on long-range propagation are given for general PH-DGNs with dissipative components.\", \"The tables of experimental results could be clearer (in particular Tables 2 and 5).\", \"For $\\\\mathrm{PH-GDN}_{\\\\mathrm{C}}$, which has theoretical guarantees, the prediction performance on the real dataset (long-range graph benchmark) does not outperform existing methods, and hence its practical usefulness is limited.\"], \"questions\": [\"**Questions**\", \"l.161: What is the definition of anti-derivative of an activation function $\\\\sigma$?\", \"l.223, l.228: Which norm is used in $\\\\| \\\\partial \\\\mathbf{b}_u(T) / \\\\partial \\\\mathbf{b}_u(T-t) \\\\|$?\", \"l.259: *Therefore, [...]. , it holds the capability of PH-GDN to perform long-range propagation effectively*: the authors compare the upper bounds of sensitivity for the existing model (Theorem 2.3) and the proposed model (Theorem 2.4), then argue that since the latter is larger than the former, the proposed model is more effective in performing long-range propagation. 
However, it is difficult to claim so because there is the possibility that Theorem 2.4 only shows a looser bound than Theorem 2.3, which does not exclude the possibility that the upper bound does not properly reflect the sensitivity of the PH-GDN. It should be shown theoretically or experimentally that this upper bound adequately reflects the sensitivity of the PH-GDN.\", \"l.1129: I want to clarify the setup of the Graph Transfer Task: If I understand correctly, the feature vectors of all nodes are 1-dimensional and randomly sampled from $\\\\mathrm{Unif}([0, 0.5))$. The target value is 0 for the source node and 1 for the target node. Assuming that this problem setup is correct, I need help understanding why this task is solvable because the model cannot distinguish source or target nodes from other nodes from feature vectors.\", \"**Minor Comments**\", \"l.196: *non-dissipative (i.e., long-range)*: I think this is a slightly misleading expression. My understanding is that this paper uses non-dissipative in the sense of energy-perserving. Although non-dissipative implies long-range propagation (Theorem 2.3), they are not equivalent. In fact, this paper argues that non-dissipative PH-DGN performs better than conservative PH-DGN in predicting the LRGB dataset in some numerical experiments.\", \"l.436: The use of position encoding is only mentioned in the caption of Table 2 and should be explicitly stated in the text, specifically in the setup of Section 3.4.\", \"l.436: The correspondence between Table 2 and Table 5 needs to be clarified. For example, the method titled *MPNNs* in Table 2 is titled *re-evaluated* in Table 5. However, GCNII is not listed as *re-evaluated*, but as *MPNNs*. Furthermore, it is difficult to tell from the captions of Table 5 whether the authors test each listed method by themselves or is a citation of existing research. The reference should be indicated for each method in Table 5, for example, by adding a column such as *reference* to Table 5.\", \"l.443: *Overall, our port-Hamiltonian framework [...] shows great benefit [...] without requiring additional strategies such as global position encoding, global attention mechanism, or rewiring techniques [...].*: I suggest clarifying how the usefulness of each strategy is shown by comparing it with existing methods. More specifically:\", \"The superiority of the proposed method over global position encoding is justified by the comparison with MPNN-based models using position encoding.\", \"The superiority over the global attention mechanism is by comparison with the Transformer-based method.\", \"The superiority of rewiring methods by comparison with Drew.\", \"l.1153: The reference to Adam (Kingma & Ba, 2015) should be cited.\", \"l.1343: n. layers -> the number of layers\", \"l.1353: Table 6 claims that PH-DGN achieves both Hamiltonian conservation and Learnable driving forces. However, I think this is misleading because to achieve Hamiltonian conservation, the dissipative component must be removed, resulting in $\\\\mathrm{PH-DGN}_{\\\\mathrm{C}}$. It is not possible to achieve both characteristics at the same time in one architecture. 
Instead, two variants that achieve only one of the two are proposed.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N.A.\", \"rating\": \"5\", \"confidence\": \"2\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Part 3\", \"comment\": \"**Regarding the relation between the nature of the task with the balance between conservative and dissipative parts**\\n\\nWe appreciate the Reviewer's constructive comment. In general, we believe that hyperparameter tuning should be performed to select the right balance between conservative components and driving forces components to achieve the best performance. Nevertheless, our intuition from our experiments in Section 3 and Appendix D.4 is that a purely conservative PH-DGN may have improved utility for tasks that require preserving ***all*** information. This is the case of propagating all node labels over the graph, or counting the number of nodes with a given label. Differently, driving forces, by acting as adaptive filters, tend to enhance performance on real-world tasks characterized by high noise levels.\\n\\n\\n**Regarding additional experiments on real Hamiltonian dynamics simulation**\\n\\nWe thank the Reviewer for the suggestions to demonstrate which other application fields could benefit from our work. Simulating or learning dynamical behavior stemming from physical or chemistry-based generation processes is a very interesting field of application, explored in seminal works on physics-inspired neural networks, which explicitly aim to learn a specific Hamiltonian dynamic based on observations. We designed our model with the goal of improving the long-range capabilities of DGNs, employing port-Hamiltonian systems theory to compute information-rich embeddings that do not (necessarily) mimic (quantum)physical behavior but rather include long-range dynamics. Therefore, we aimed to transform the evolution of the port-Hamiltonian system dynamics into the unfolding of iterative applications of information propagation in DGNs, thereby introducing non-dissipative properties into the DGN\\u2019s design. With this general aim in mind, we believe that our approach is broadly applicable to a variety of graph-based tasks where long-range dependencies are critical, going beyond strictly physical systems. Nevertheless, approaching molecular dynamics simulations or other problems governed by physical Hamiltonian dynamics is an exciting direction for future research, as it could provide advantages even in such domains.\"}", "{\"title\": \"Response to authors' rebuttal (1/3)\", \"comment\": \"**1) Regarding the theoretical guarantees for the general PH-DGN**\\n\\n> To further accommodate the Reviewer's suggestion, we derived some additional theoretical results on the effects of the port-Hamiltonian components on information propagation in Appendix B.7. [...] These results indicate that, under mild assumptions, the port-Hamiltonian components theoretically support long-range propagation.\\n\\nI thank the authors for providing additional theoretical results on PH-DGNs with dissipative components. 
From Theorems B.1--B.3, I understand that when we introduce the small dissipative components, the perturbation of the BSM is small as well, guaranteeing the non-vanishing gradient and long-range propagation.\\n\\n\\n**2) Regarding Tables 2 and 5**\\n\\n> In our revised paper, we have improved the clarity of both tables by better specifying which result comes from which paper, and whether or not the model uses positional/structural encodings.\\n\\nI thank the authors for updating the tables. The update improves their clarity. I have several clarifications:\\n\\n1. Does **Re-evaluated** in Table 5 mean that the results of GCNII+PE/SE and DRew-GCN+PE/SE are re-evaluated by the authors and not cited from T\\u00f6nshoff et al. (2023)?\\n2. If 1 is true, could you let me know why the authors re-evaluated these models?\\n\\nIn addition, I have the following suggestions for further improvement. However, since this is a matter of taste, it is OK if the authors opt not to adopt them:\\n\\n1. While the order of Table 2 is T\\u00f6nshoff et al. -> Multi-hop (Gutteridge) -> Transformers (Gutteridge), that of Table 5 is Multi-hop (Gutteridge) -> Transformers (Gutteridge) -> T\\u00f6nshoff. Tables 2 and 5 should have the same order of sections.\\n2. I think the section title of T\\u00f6nshoff et al. in Table 5 should be **MPNNs** rather than **Re-evaluated** because not all models are re-evaluated and other section titles are model names (e.g., Multi-hop DGNs).\\n\\n\\n**3) Regarding the usefulness of PH-DGN$_C$ on the real dataset**\\n\\n> For $\\\\mathrm{PH-GDN}_{\\\\mathrm{C}}$, which has theoretical guarantees, the prediction performance on the real dataset (long-range graph benchmark) does not outperform existing methods, and hence its practical usefulness is limited.\\n\\nI thank the authors for responding to my concerns about the practical usefulness of $\\\\mathrm{PH-GDN}_{\\\\mathrm{C}}$.\\n\\n> We appreciate the Reviewer's comment. First, we would like to highlight that, given our PH-DGN is designed through the lens of differential equations, the most relevant baselines for comparison are DE-DGN models.\\n\\nI think the design of the numerical experiment should reflect the messages the authors want to convey. From the practitioners' point of view, one of the most important criteria for selecting models is prediction performance. For them, the origin of the model architecture, especially whether the architecture is derived from differential equations, is not important. If the authors emphasize the comparison between PH-DGN and other DE-DGN models, I want to clarify what the authors want to claim by this comparison.\\n\\n> Furthermore, PH-DGN$_C$ outperforms global graph transformers and multi-hop approaches across both tasks and ranks as the third-best overall model on the peptide-func dataset.\\n> Overall, PH-DGN$_C$ outperforms 12 out of 13 baselines on peptide-func and 7 out of 13 baselines on the other task.\\n\\nAs in the previous discussion, I want to clarify what the authors want to claim by these comparisons.\\n\\n> To provide empirical support for this intuition, we have included in the revised manuscript (Appendix D.4) an ablation study on the Minesweeper task from (Platonov et al. (2023)) and on the graph transfer task.\\n\\nI want to clarify the experimental setting of this task, more specifically, the following part:\\n\\n> The model selection is split into two stages. 
First, the best purely conservative PH-DGN (i.e., PH-DGN$_C$) is selected from a grid with total number of layers L\\u2208{1,5,8}, embedding dimension d\\u2208{256,512}, step size \\u03f5\\u2208{0.1,1.0} according to the validation ROC-AUC. Then, the best selected configuration is tested with different driving forces.\\n\\nGiven training and validation datasets, I wonder how the learnable parameters in models for dampening and external forces are used for choosing the hyperparameters of PH-DGN$_C$.\\n\\n-------------------\\n\\n**Regarding the term \\\"anti-derivative\\\"**\\n\\nI thank the authors for the explanation. I did not know the primitive integral is also called the anti-derivative. I am sorry that I should have checked it by myself.\\n\\n\\n**Regarding the norm of the sensitivity**\\n\\n> Since $[\\\\partial x_u(T)/ \\\\partial x_u(T-t)]$ is an $\\\\mathbb{R}^{d\\\\times d}$ matrix, our statements are valid for all sub-multiplicative matrix norms, e.g., p-norm and Frobenius norm\\n\\nI understand that the proof is valid for any sub-multiplicative norm. I suggest explicitly writing that the norm is any sub-multiplicative to the statement.\"}", "{\"title\": \"Response to authors' rebuttal (2/3)\", \"comment\": \"**Regarding upper bound sensitivity**\\n\\n> [...] To clarify, Theorem 2.3 provides a lower bound for the influence of the same node $u$, while Theorem 2.4 establishes an upper bound for the influence between two different nodes, $u$ and $v$, both derived using our proposed model.\\n\\nI thank the authors for pointing it out. Since this question was based on my misunderstandings, I want to withdraw it.\\n\\n\\n> The study of upper bounds for interactions between different nodes was recently proposed by Topping et al. (2022) and Di Giovanni et al. (2023) as a means to characterize the long-range propagation problem and it is currently a widely accepted means to characterize the long-range propagation problem within the community of deep learning for graphs.\\n\\nI agree with the authors' arguments that assuming that the upper bound of theoretical guarantees reflects the properties of ML models is one of the standard (if not perfect) methods in ML theory, such as statistical learning theory.\\n\\n> This claim is supported by our experimental results presented in Section 3. Specifically, our model (with the bigger upper bound) achieves better results on all long-range tasks than MPNN models (with a smaller upper bound, as shown in Topping et al. 2022 and Di Giovanni et al. 2023). \\n\\nI also agree that the model's better performance in long-range tasks justifies the comparison using the upper bounds. \\n\\n\\n**Regarding the graph transfer task**\\n\\n> [...] The source node is assigned with feature \\\"1\\\", the target with feature \\\"0\\\", and intermediate nodes with a random feature sampled from a uniform distribution in $[0, 0.5)$. Then, we implemented a supervised task (specifically, a regression problem) that measures how much of the source node information has reached the target node. [...]\\n\\nI thank the authors for clarification of the problem setup. However, I still do not understand the setup. I guess I could not fully figure out what the authors assume implicitly. More specifically, I understand the feature vector is 1 for the source node, 0 for the target node, and random value from $[0, 0.5)$ for intermediate nodes. Then, how about the target value of the regression task? 
Do we assign a value to each node in a graph (i.e., node prediction task) or a single value to the whole graph (i.e., graph prediction task)? \\n\\n\\n**Regarding non-dissipativeness**\\n\\n> The Reviewer is correct that we designed our PH-DGN as a non-dissipative system in the sense of \\\"energy-preserving\\\". Specifically, we built our PH-DGN on top of recent literature on non-dissipative dynamical systems (Haber et al. (2017); Bo et al. (2019); Gravina et al. (2023)), which show that a non-dissipative behavior allows capturing long-term (in the case of time series) and long-range (in the graph domain) dependencies. In a broader sense, the energy of the system can be associated to the node information, since it is linked to the node embedding sensitivity. We believe that our PH-DGN with driving forces achieves better performance on the real-world LRGB because the driving forces act as an adaptive filter mechanism that filters noisy information, facilitating the learning of relevant information. As discussed in the newly introduced Appendix D.4, there are scenarios where a purely non-dissipative approach is more beneficial, as it ensures no loss of information.\\n\\nI thank the authors for the detailed explanation of the intuition about the relationship between the non-dissipative system and long-range dependencies.\\n\\n> Although we believe there is a strong correlation between long-range and non-dissipative, we acknowledge the source of misunderstanding and rephrased the sentence in the revised draft.\\n\\nI do not deny the strong correlation between the two concepts. Instead, I also think they are interdependent. However, whether they are equivalent is unknown; we need more studies to claim so. Therefore, I think it is better to treat these concepts differently.\\n\\n\\n**Regarding positional encoding in the text**\\n\\n> The positional/structural encoding refers only to MPNN baselines, while our model does not rely on such an approach. We clarified this aspect in the revised manuscript (Table 2 and Section 3.4).\\n\\nOK. I thank the authors for the clarification.\\n\\n\\n**Regarding Tables 2 and 5**\\n\\n> We thank the Reviewer for the effort in improving the quality of our work. We clarified this aspect in the revised manuscript.\\n\\nOK. See my response to **2) Regarding Tables 2 and 5** for my suggestions.\\n\\n\\n**Regarding the comparison clarification**\\n\\n> Again, we thank the Reviewer for its effort and we refer them to the revised manuscript, which now contains a deeper clarification on the usefulness of the proposed model with respect to existing methods.\\n\\nOK. I thank the authors for the clarification.\"}", "{\"title\": \"Rebuttal Part 1\", \"comment\": \"We thank the Reviewer for highlighting the ***solid***, ***general***, and ***sound*** nature of our approach, as well as the ***improved efficiency*** over previous methods. We are also grateful for recognizing the ***clarity*** of our work and its ***practical value***, especially on long-range propagation. Lastly, we thank the Reviewer for the constructive and positive comments on our manuscript. We will reply to each weakness and question in original ordering. In particular, aside from the requested clarifications, following up the Reviewer\\u2019s suggestion, we have added a novel experiment on a recent benchmark that allows appreciating the impact of the different dissipative components in our approach.\\n\\n\\n**Regarding existing ideas**\\n\\nThe Reviewer raises an important comment. 
As correctly noted by the Reviewer, our PH-DGN belongs to the family of DE-DGNs, thus it builds upon previous works on message passing and neural-ODEs. However, PH-DGN offers a mathematically grounded approach by leveraging port-Hamiltonian dynamical systems to balance non-dissipative long-range propagation and non-conservative behaviors, which to the best of our knowledge has never been done before. We then verified this property on a comprehensive experimental suite. Although our PH-DGN builds upon known theory (which however has been proposed and used in a totally different context), we believe that our model provides a more general and efficient solution designed to tackle the problem of long-range propagation in graphs, which represents a significant challenge for message passing models. Moreover, in contrast to Hamiltonian-based approaches, our PH-DGN is capable of deviating from purely conservative trajectories when needed by the downstream task. In other words, it can employ driving forces as an adaptive filter mechanism that filters noisy information, facilitating the learning of relevant long-range information. We believe that these considerations together with our theoretical and experimental results shed light on the novelty of our approach.\\n\\n**Regarding the derivation of the discretization scheme**\\n\\nWe appreciate the Reviewer\\u2019s effort to further improve the clarity of our work. Due to the submission page limit, we deliberately opt to omit the details of the final discretized model in the main text. All necessary steps together with background information on symplectic integration schemes are presented in Appendix A.3. Specifically, Eq. 11 provides the discretization step for our PH-DGN ***without*** driving forces. We explicitly rewritten Eq. 11 by decomposing it into its p and q components in Eqs 12 and 13. The explicit discretization steps for our PH-DGN ***with*** driving forces are given in Eqs. 13 and 14. We are happy to discuss further if specific steps of the discretization are still unclear. \\n\\n**Regarding the structure of W and V**\\n\\nWe thank the Reviewer for the comment, and we are happy to elaborate on this aspect, since we believe it is a crucial step in the derivation of the discrete model. Following the Hamiltonian formalism, we recall that the global state $x$ can be decomposed into the two components $p$ and $q$. Therefore, imposing a block diagonal structure for $W$ and $V$ is required only theoretically in the global formulation of our model (i.e., Eq. 11) to ensure the separation of components in the explicit integration scheme. Looking at the explicit scheme (i.e., Eqs. 12 and 13 or Eqs. 13 and 14), each block in the original $W$ and $V$ are $\\\\mathbb{R}^{d/2\\\\times d/2}$ matrices with no constraints on the structure. Thus, $W_p$, $W_q$, $V_p$, and $V_q$ can be considered as independent standard weight matrices. We made this point more clear in the revised manuscript.\\n\\n\\n**Regarding why the port-Hamiltonian framework is more appropriate than simpler alternatives**\\n\\nWe thank the Reviewer for the comment. As evidenced by our experimental results and previous literature on long-range propagation (Dwivedi et al. (2022); Di Giovanni et al (2023)) standard MPNN-based models, which can be considered as the simplest DGN models, are insensitive to information contained at distant nodes. To counteract this limitation, recent literature introduced global attention mechanisms or rewiring techniques. 
While effective, these approaches significantly increase the complexity of information propagation due to the use of denser graph shift operators. In contrast, our PH-DGN achieves state-of-the-art performance without relying on such additional techniques, thus, providing a lightweight model that inherently supports long-range propagation by design. \\n\\n*References:* \\n\\nDwivedi et al. Long Range Graph Benchmark. NeurIPS 2022. \\n\\nDi Giovanni et al. On over-squashing in message passing neural networks: the impact of width, depth, and topology. ICML 2023\"}", "{\"comment\": \"Dear Reviewer XxSd,\\n\\nAs the rebuttal period is closing soon, we would like to thank you again for the detailed feedback provided in your review, and for the continued discussion and engagement with us.\\n**We have made significant efforts to address each of your comments, including additional theoretical statements, experiments and clarifications.**\\nOverall, we believe that our extensive responses helped us to improve the quality of our paper and should address the reviewer's concerns. Therefore, we would like to thank you for your guidance and kindly ask you to consider revising your evaluation.\\n\\nSincerely,\\n\\nThe Authors.\"}", "{\"title\": \"Rebuttal Part 3\", \"comment\": \"**- Regarding upper bound sensitivity**\\n\\nWe thank the Reviewer for the insightful comment. To clarify, Theorem 2.3 provides a lower bound for the influence of the same node $u$, while Theorem 2.4 establishes an upper bound for the influence between two different nodes, $u$ and $v$, both derived using our proposed model. The study of upper bounds for interactions between different nodes was recently proposed by Topping et al. (2022) and Di Giovanni et al. (2023) as a means to characterize the long-range propagation problem and it is currently a widely accepted means to characterize the long-range propagation problem within the community of deep learning for graphs. We agree with the Reviewer that an upper bound may not fully reflect the actual long-range capability of the model. However, we believe that, since information cannot vanish and our PH-DGN$_C$ can theoretically propagate more information, our PH-DGN$_C$ is theoretically more effective in long-range propagation. This claim is supported by our experimental results presented in Section 3. Specifically, our model (with the bigger upper bound) achieves better results on ***all*** long-range tasks than MPNN models (with a smaller upper bound, as shown in Topping et al. 2022 and Di Giovanni et al. 2023). Therefore, we believe that our theoretical bounds reflect the practical sensitivity of PH-DGN, demonstrating its superior capability to propagate information over long ranges compared to previous state-of-the-art models.\\n\\nTopping et al. Understanding over-squashing and bottlenecks on graphs via curvature. ICLR 2022\\n\\nDi Giovanni et al. On over-squashing in message passing neural networks: the impact of width, depth, and topology. ICML 2023\\n\\n**- Regarding the graph transfer task**\\n\\nWe refer the Reviewer to Appendix C.2 for deeper insights on the data and the task of the graph transfer experiment. We built this experiment based on the graph transfer task proposed by Di Giovanni et al. (2023). In simple words, it is an information-exchange task where we measure how far the information can travel within the graph. 
The source node is assigned with feature \\u201c1\\u201d, the target with feature \\u201c0\\u201d, and intermediate nodes with a random feature sampled from a uniform distribution in [0, 0.5). Then, we implemented a supervised task (specifically, a regression problem) that measures how much of the source node information has reached the target node. On large graphs, the more information reaches the target node, the more effective the model is in the long-range regime. We clarified this in the revised manuscript.\\n\\nDi Giovanni et al. On over-squashing in message passing neural networks: the impact of width, depth, and topology. ICML 2023\\n\\nWe now move to the minor comments, following original ordering. Again, we thank the Reviewer for working with us in improving the quality of our work.\\n\\n**- Regarding non-dissipativeness**\\n\\nThe Reviewer is correct that we designed our PH-DGN as a non-dissipative system in the sense of \\u201cenergy-preserving\\u201d. Specifically, we built our PH-DGN on top of recent literature on non-dissipative dynamical systems (Haber et al. (2017); Bo et al. (2019); Gravina et al. (2023)), which show that a non-dissipative behavior allows capturing long-term (in the case of time series) and long-range (in the graph domain) dependencies. In a broader sense, the energy of the system can be associated to the node information, since it is linked to the node embedding sensitivity. We believe that our PH-DGN with driving forces achieves better performance on the real-world LRGB because the driving forces act as an adaptive filter mechanism that filters noisy information, facilitating the learning of relevant information. As discussed in the newly introduced Appendix D.4, there are scenarios where a purely non-dissipative approach is more beneficial, as it ensures no loss of information. Although we believe there is a strong correlation between long-range and non-dissipative, we acknowledge the source of misunderstanding and rephrased the sentence in the revised draft.\\n\\nHaber et al. Stable architectures for deep neural networks. Invers Problems 2017\\n\\nBo et al. AntisymmetricRNN: A dynamical system view on recurrent neural networks. ICLR 2019\\n\\nGravina et al. Anti-Symmetric DGN: a stable architecture for Deep Graph Networks. ICLR 2023\\n\\n**- Regarding positional encoding in the text**\\n\\nThe positional/structural encoding refers only to MPNN baselines, while our model does not rely on such an approach. We clarified this aspect in the revised manuscript (Table 2 and Section 3.4).\\n\\n**- Regarding Tables 2 and 5**\\n\\nWe thank the Reviewer for the effort in improving the quality of our work. We clarified this aspect in the revised manuscript.\"}", "{\"comment\": \"We thank the Reviewer for the quick response.\\n\\n**Regarding the evaluation protocol in Minesweeper task.**\\n\\nWe thank the Reviewer for the question, and we are happy to clarify on this aspect. The evaluation protocol that we used for the Minesweeper task is the following:\\n1) We first select the hyperparameters that PH-DGN have in common with PH-DGN$_C$ using the original training and validation datasets proposed in Platonov et al. 
(2023), as the Reviewer correctly stated;\\n2) Without performing any additional model selection for the driving forces, we retrained all variants of PH-DGN using the training set and evaluate their performance on both the validation and test sets;\\n3) For reporting purposes, we color-coded the best results in the table based on the validation score, which is now included in the revised manuscript for improved clarity on this aspect.\\n\\nWe would like to emphasize that all hyperparameter tuning is conducted strictly using the training and validation sets, while the test set is reserved exclusively for final evaluation, ensuring that it does not influence hyperparameter selection at any stage and thereby eliminating any risk of data leakage.\"}", "{\"summary\": \"This paper introduces port-Hamiltonian Deep Graph Networks (PH-DGN), a new framework for graph neural networks that addresses the challenge of long-range information propagation. The approach embeds message passing within a port-Hamiltonian dynamical system framework, where node states are split into position (q) and momentum (p) components coupled through skew-symmetric matrices. The Hamiltonian function incorporates graph structure through neighborhood aggregation functions, and the port-Hamiltonian extension introduces learnable dissipative components (internal dampening and external forces) that allow the network to balance conservative information preservation with task-dependent information modification. The authors provide theoretical analysis of the framework's properties, including bounds on sensitivity and gradient behavior, and demonstrate empirically that their method outperforms existing approaches on several tasks requiring long-range information propagation, including synthetic graph property prediction tasks and real-world molecular property prediction. The framework can incorporate different message passing schemes and provides improved efficiency compared to previous Hamiltonian-based graph neural networks, with experimental results showing that while the purely conservative version performs well, including the dissipative components often leads to better task performance.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Overall, the idea of using Hamilton's dynamics for GNN is attractive, though not entirely new.\", \"Nevertheless, the paper is solid and is more general than existing approaches.\", \"The improved efficiency over previous Hamiltonian GNN approaches and the practical benefits for long-range propagation make it a useful contribution to the field.\", \"The experimental results are also interesting.\", \"1. Technical soundness:\", \"Clear theoretical analysis of conservation properties\", \"Explicit connection between Hamiltonian dynamics and message passing\", \"Thorough experimental validation\", \"2. Practical value:\", \"More efficient than previous Hamiltonian GNN approaches\", \"Good performance on long-range tasks\", \"Can incorporate different message passing schemes\", \"3. Clarity:\", \"Well-structured presentation\", \"Good balance of theory and empirics\", \"Clear comparisons to prior work\"], \"weaknesses\": [\"1. Novelty is incremental:\", \"Builds on existing ideas (Hamiltonian GNNs, message passing)\", \"Main contribution is combining these effectively rather than fundamentally new concepts\", \"2. 
Technical questions:\", \"The derivation of the discretization scheme could use more justification\", \"Some assumptions about the structure of $W$ and $V$ matrices for explicit updates feel restrictive\", \"Could better explain why port-Hamiltonian framework is more appropriate than simpler alternatives\", \"3. Empirical:\", \"Some ablation studies could be stronger (e.g., analyzing impact of different dissipative terms)\", \"Could better justify hyperparameter choices\"], \"questions\": \"1. While the paper shows good empirical performance, it's unclear what types of problems would theoretically benefit most from a port-Hamiltonian approach versus standard message passing. Could the authors provide analysis or insights about which properties of the underlying data generation process would suggest using their method?\\n\\n2. The authors demonstrate that adding dissipative components often improves performance, but how does the balance between conservative (Hamiltonian) and dissipative parts relate to the nature of the task? It would be valuable to see an analysis of when pure conservation might be preferable to including dissipation, and how practitioners should choose this balance for new problems.\\n\\n3. Given that the method is inspired by physical Hamiltonian systems, it's surprising that there are no experiments on problems with actual Hamiltonian dynamics (e.g., molecular dynamics simulations). Such experiments could help validate whether the method's conservation properties provide advantages for physically meaningful conservation laws, beyond just improving general information flow.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Rebuttal Part 2\", \"comment\": \"**3) Regarding the usefulness of PH-DGN$_C$ on the real dataset**\\n\\nWe appreciate the Reviewer\\u2019s comment. First, we would like to highlight that, given our PH-DGN is designed through the lens of differential equations, the most relevant baselines for comparison are DE-DGN models. As shown in Table 2, PH-DGN$_C$ achieves superior performance compared to all DE-DGN baselines. Furthermore, PH-DGN$_C$ outperforms global graph transformers and multi-hop approaches across both tasks, and ranks as the third-best overall model on the peptide-func dataset.\\nOverall, PH-DGN$_C$ outperforms 12 out of 13 baselines on peptide-func and 7 out of 13 baselines on the other task. \\nIn general, our intuition is that PH-DGN$_C$ may have improved utility for tasks that require preserving ***all*** information, such as propagating all node labels over the graph or counting the number of nodes with a given label. To provide empirical support for this intuition, we have included in the revised manuscript (Appendix D.4) an ablation study on the Minesweeper task from (Platonov et al. (2023)) and on the graph transfer task. For your convenience, we report the Minesweeper results in the table below.\\nIn the Minesweeper task (i.e., a counting-based task), PH-DGN shows no advantage in deviating from a purely conservative regime. Notably, our PH-DGN$_C$ achieves a new state-of-the-art performance of 98.45 on this benchmark. 
These results suggest that model selection should determine the optimal use of port components depending on the specific data setting.\\n\\n\\n| **Model** \\t| **Train Score (ROC-AUC \\u2191)**\\t| **Test Score (ROC-AUC \\u2191)**\\t|\\n|-------------------------------------|--------------------------------|-------------------------------|\\n| Top-6 models form Luo et al. (2024) | | | \\n| GraphGPS | - | 90.75 \\u00b1 0.89 |\\n| SGFormer | - | 91.42 \\u00b1 0.41 |\\n| Polynormer | - | 97.49 \\u00b1 0.48 |\\n| GAT | - | 97.73 \\u00b1 0.73 |\\n| GraphSAGE \\t| - \\t| 0.9777 \\u00b1 0.0062 \\t|\\n| GCN \\t| - \\t| **0.9786 \\u00b1 0.0024** \\t|\\n| **Our - no driving forces** \\t| \\t| \\t|\\n| PH-DGN$_C$ \\t| 0.9978 \\u00b1 0.0005 \\t| **0.9845 \\u00b1 0.0021** \\t|\\n| **Our - with driving forces** \\t| \\t| \\t|\\n| *PH-DGN* \\t| \\t| \\t|\\n| *Dampening* / *External Force* \\t| \\t| \\t|\\n| -- / MLP4-Sin \\t| 0.9937 \\u00b1 0.0038 \\t| 0.9661 \\u00b1 0.0057 \\t|\\n| -- / DGN-tanh \\t| 0.9928 \\u00b1 0.0010 \\t| 0.9720 \\u00b1 0.0042 \\t|\\n| param / -- \\t| 0.9979 \\u00b1 0.0005 \\t| **0.9842 \\u00b1 0.0021** \\t|\\n| param / MLP4-Sin \\t| 0.9955 \\u00b1 0.0021 \\t| 0.9686 \\u00b1 0.0052 \\t|\\n| param / DGN-tanh \\t| 0.9930 \\u00b1 0.0019 \\t| 0.9727 \\u00b1 0.0029 \\t|\\n| MLP4-ReLU / -- \\t| 0.9962 \\u00b1 0.0057 \\t| 0.9533 \\u00b1 0.0065 \\t|\\n| MLP4-ReLU / MLP4-Sin \\t| 0.9993 \\u00b1 0.0003 \\t| 0.9567 \\u00b1 0.0064 \\t|\\n| MLP4-ReLU / DGN-tanh \\t| 0.9789 \\u00b1 0.0024 \\t| 0.9541 \\u00b1 0.0066 \\t|\\n| DGN-ReLU / -- \\t| 0.9496 \\u00b1 0.0017 \\t| 0.9342 \\u00b1 0.0061 \\t|\\n| DGN-ReLU / MLP4-Sin \\t| 0.9561 \\u00b1 0.0048 \\t| 0.9387 \\u00b1 0.0055 \\t|\\n| DGN-ReLU / DGN-tanh \\t| 0.9501 \\u00b1 0.0047 \\t| 0.9332 \\u00b1 0.0084 \\t|\\n\\nPlatonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. ICLR 2023\\n\\nNow we are happy to clarify on each question, following original ordering:\\n\\n**- Regarding the term \\u201canti-derivative\\u201d**\\n\\nWe use the term \\u201canti-derivative\\u201d to refer to a differentiable function $F$ whose derivative is equal to the original function $f$. Therefore, the anti-derivative of an activation function is the function $F(x)$ whose derivative leads to the original activation function, i.e., $F^\\\\prime(x) = \\\\sigma (x)$.\\n\\n**- Regarding the norm of the sensitivity**\\n\\nSince $[\\\\partial x_u(T)/ \\\\partial x_u(T-t)]$ is an $\\\\mathbb{R}^{d\\\\times d}$ matrix, our statements are valid for all sub-multiplicative matrix norms, e.g., p-norm and Frobenius norm\"}", "{\"title\": \"Response to authors' rebuttal (3/3)\", \"comment\": \"**Regarding Adam citation**\\n\\n> We included the citations to our employed optimization strategies in the revised manuscript\\n\\nOK. I thank the authors for adding the reference.\\n\\n\\n**Regarding n. layer typo**\\n\\n> We corrected the typo in the revised manuscript. Thank you.\\n\\nOK. \\n\\n\\n**Regarding Table 6**\\n\\n> Following the Reviewer's suggestion, we revised Table 6 to clarify that driving forces do not allow for purely Hamiltonian conservation.\\n\\nOK. I thank the authors for considering my comments and updating the table.\"}", "{\"title\": \"Response to Reviewer's rebuttal\", \"comment\": \"We thank the Reviewer for continuing to engage with us and for positively acknowledging the majority of our clarifying responses. Below, we provide our responses to the last open questions. 
We hope that these clarifications will help the Reviewer reconsider their overall assessment of our work.\\n\\n---\\n\\n**2) Regarding Tables 2 and 5**\\n\\nWe thank the Reviewer for acknowledging the improved clarity of our Tables 2 and 5. We are happy to further elaborate on the specific wording used in the Tables.\\nWe employ the term \\u201cre-evaluated\\u201d to point to the re-evaluation performed by T\\u00f6nshoff et al. (2023) with a different training protocol. Therefore, we used \\u201cre-evaluated\\u201d to reflect different performances of already presented models. We agree with the Reviewer that this term may be a source of misunderstanding and hence we propose to rephrase it as \\u201cModified experimental protocol, T\\u00f6nshoff et al. (2023)\\u201d. \\n\\nThe Reviewer is correct that we computed the scores for GCNII+PE/SE and DRew-GCN+PE/SE strictly following the experimental setup of T\\u00f6nshoff et al. (2023). The reason for this choice was (i) to further underscore the overall performance improvements introduced by our PH-DGN by providing the reader with a more complete picture, which includes the missing SOTA method in T\\u00f6nshoff et al.'s re-evaluation; and (ii) to comply with this Reviewer's request to include GCNII in the \\\"re-evaluated\\\" section of the table. \\n\\n**3) Regarding the usefulness of PH-DGN on the real dataset**\\n\\nWe agree with the Reviewer that prediction performance is one of the most critical criteria for model selection from a practitioner\\u2019s perspective. However, given that PH-DGN falls within the category of DE-DGNs, we believe that, from an analysis point of view, it is essential to first compare its performance against other models in the same class. These comparisons are particularly meaningful as they involve direct competitors that share a similar underlying rationale and complexity. Even though this analysis view is important, we believe that the more general practitioner\\u2019s perspective is also crucial. For this reason, we compared the performance of our PH-DGN with approaches from different classes, such as graph transformers and multi-hop DGNs as in Section 3.4, which introduce denser graph-shift operators. Throughout all of our experiments, it emerges that PH-DGN and its fully conservative version (PH-DGN$_C$) achieve state-of-the-art performance. Therefore, the fact that PH-DGN$_C$ is better than methods in the same class of models and better on average than current SOTA (from different classes) actually indicates a practical utility of PH-DGN$_C$. Therefore, we believe that the specific definition and characteristics of our model are crucial when it comes to datasets and tasks that require the exploitation of long-range propagation.\\n\\n**Regarding choosing the hyperparameters of PH-DGN$_C$ in Minesweeper task**\\n\\nThank you for the question. Dampening and external forces do not play any role in the selection of the PH-DGN$_C$ hyperparameters, since such components are not employed in the purely conservative setting. \\n\\n**Regarding the norm of sensitivity**\\n\\nWe highlighted this point explicitly in the newly released version. Thank you.\\n\\n**Regarding the graph transfer task**\\n\\nWe thank the Reviewer for the follow-up question and are happy to clarify the setup further.\\nThe task is formulated as a node-level regression problem. 
The target output is constructed by modifying the input features: the source node's target is a feature vector 0, the target node's target is a feature vector 1, and the intermediate nodes retain their original random values from the input. In other words, the ground truth values are the switched node labels of source and target nodes. Then, the model is trained to predict the target values for all nodes, and the loss is computed as the MSE between the predicted values and the corresponding target values across the entire graph. We clarified this aspect in the revised manuscript.\\n\\n----------\\n\\nWe would like to thank the Reviewer again for the thoughtful comments and intriguing questions. We have now uploaded a revised version of our paper that includes all the discussions made above. Overall, we think that these discussions and suggestions made by the Reviewer helped us to improve the quality of our paper. We hope that you find our responses satisfactory, and that you will consider revising your score.\"}", "{\"title\": \"Rebuttal Part 2\", \"comment\": \"**Regarding the ablation studies**\\n\\nWe appreciate the Reviewer\\u2019s constructive feedback. To address their suggestion and enhance the quality of our work, we have included an additional benchmark and an ablation study in Appendix D.4 that examines the impact of different dissipative components on the Minesweeper task from Platonov et al. (2023). While the results indicate that certain driving components perform better than others for this task, we recommend performing model selection to identify the optimal components based on the specific data setting. For convenience, we present the results for this task in the table below.\\n| **Model** \\t| **Train Score (ROC-AUC \\u2191)**\\t| **Test Score (ROC-AUC \\u2191)**\\t|\\n|-------------------------------------|--------------------------------|-------------------------------|\\n| Top-6 models form Luo et al. (2024) | | | \\n| GraphGPS | - | 90.75 \\u00b1 0.89 |\\n| SGFormer | - | 91.42 \\u00b1 0.41 |\\n| Polynormer | - | 97.49 \\u00b1 0.48 |\\n| GAT | - | 97.73 \\u00b1 0.73 |\\n| GraphSAGE \\t| - \\t| 0.9777 \\u00b1 0.0062 \\t|\\n| GCN \\t| - \\t| **0.9786 \\u00b1 0.0024** \\t|\\n| **Our - no driving forces** \\t| \\t| \\t|\\n| PH-DGN$_{\\\\text{C}}$ \\t| 0.9978 \\u00b1 0.0005 \\t| **0.9845 \\u00b1 0.0021** \\t|\\n| **Our - with driving forces** \\t| \\t| \\t|\\n| *PH-DGN* \\t| \\t| \\t|\\n| *Dampening* / *External Force* \\t| \\t| \\t|\\n| -- / MLP4-Sin \\t| 0.9937 \\u00b1 0.0038 \\t| 0.9661 \\u00b1 0.0057 \\t|\\n| -- / DGN-tanh \\t| 0.9928 \\u00b1 0.0010 \\t| 0.9720 \\u00b1 0.0042 \\t|\\n| param / -- \\t| 0.9979 \\u00b1 0.0005 \\t| **0.9842 \\u00b1 0.0021** \\t|\\n| param / MLP4-Sin \\t| 0.9955 \\u00b1 0.0021 \\t| 0.9686 \\u00b1 0.0052 \\t|\\n| param / DGN-tanh \\t| 0.9930 \\u00b1 0.0019 \\t| 0.9727 \\u00b1 0.0029 \\t|\\n| MLP4-ReLU / -- \\t| 0.9962 \\u00b1 0.0057 \\t| 0.9533 \\u00b1 0.0065 \\t|\\n| MLP4-ReLU / MLP4-Sin \\t| 0.9993 \\u00b1 0.0003 \\t| 0.9567 \\u00b1 0.0064 \\t|\\n| MLP4-ReLU / DGN-tanh \\t| 0.9789 \\u00b1 0.0024 \\t| 0.9541 \\u00b1 0.0066 \\t|\\n| DGN-ReLU / -- \\t| 0.9496 \\u00b1 0.0017 \\t| 0.9342 \\u00b1 0.0061 \\t|\\n| DGN-ReLU / MLP4-Sin \\t| 0.9561 \\u00b1 0.0048 \\t| 0.9387 \\u00b1 0.0055 \\t|\\n| DGN-ReLU / DGN-tanh \\t| 0.9501 \\u00b1 0.0047 \\t| 0.9332 \\u00b1 0.0084 \\t|\\n\\nPlatonov et al. A critical look at the evaluation of GNNs under heterophily: Are we really making progress?. 
ICLR 2023\\n\\n**Regarding hyperparameter choices**\\n\\nAs detailed in Appendix C, our experiments adhered to the established procedures for each task to ensure fair evaluation and reproducibility. For hyperparameters specific to our PH-DGN, such as the step size $\\\\epsilon$ and the number of layers, we selected values from a thorough and reasonable range, taking into account factors like the average graph diameter in the training set. Lastly, we highlight that we conducted a thorough model selection over a comprehensive grid to minimize the risk of suboptimal performance. We have included this discussion in the revised manuscript. Thank you.\\n\\n**Regarding the problems tackled by PH-DGN** \\n\\nThe main objective of our work is to design the information flow within a graph as a solution of a port-Hamiltonian system to ***mitigate the challenge of long-range propagation in DGNs.*** Throughout Section 2, we provide theoretical statements to support the claim that our PH-DGN can effectively learn and propagate long-range dependencies between nodes. Afterward, in Section 3 we empirically support our theoretical findings by evaluating our PH-DGN on the graph transfer task, graph property prediction and the long-range benchmark, all specifically designed to test the model's capabilities in the long-range regime. In summary, we believe that our method is optimal for those problems that require the exploitation of long-range dependencies to be effectively solved. As an example, PH-DGN is beneficial to solve shortest-path-based problems, e.g., compute the diameter of a graph (see Section 3.3), or molecular tasks in which far away nodes interact to determine the overall function of the molecule (see Section 3.4). Furthermore, as emerged by our experiments in Section 3.4 and in the newly added Appendix D.4, the use of driving forces can lead to better performance on real-world tasks. The driving forces act as an adaptive filter mechanism that filters noisy information. Meanwhile, a purely conservative approach (i.e., __without__ driving forces) can have improved utility for tasks that require preserving ***all*** information, like the Minesweeper task.\"}", "{\"title\": \"Rebuttal Response\", \"comment\": \"Thank you for the thorough rebuttal and explanations! They increased my understanding of the work.\"}", "{\"title\": \"Rebuttal Summary\", \"comment\": \"We sincerely thank the reviewers for their thoughtful and detailed feedback, as well as for recognizing the key strengths of our work. We are happy to read that the reviewers found that our paper provide an ***attractive*** and ***novel*** methodology (Revs. X6rz, 7PXw) for the incorporation of port-Hamiltonian dynamics in graph representation learning. We are also grateful for the acknowledgment of the ***clarity*** (Revs. X6rz, XxSd, 7PXw) and ***technical soundness*** (Rev. X6rz) of our work while reporting ***strong theoretical results*** (Revs. X6rz, XxSd, 7PXw) and ***thorough*** (Rev. X6rz) and ***superb*** (Rev. 7PXw) experimental validation to show the practical benefits (Revs. X6rz, XxSd) for long-range propagation.\\n\\nWe are also thankful for the constructive feedback, which has further improved the quality of our paper. 
Specifically:\\n\\n**Additional Experiments:**\\n\\n- Following the Reviewer **X6rz**\\u2019s and Reviewer **XxSd** \\u2019s suggestions, we have added a novel experiment on a recent benchmark to: (i) appreciate the impact of the different dissipative components in our approach, and (ii) highlight the utility of the purely conservative version of PH-DGN (i.e., PH-DGN$_C$). To further address point (ii), we have also included an additional ablation study on the graph transfer task, demonstrating the improved utility of PH-DGN$_C$ in tasks that require preserving all information.\\n\\n**Revisions to the Paper:**\\n\\n- Following the Reviewer **XxSd**\\u2019s and Reviewer **7PXw**\\u2019s suggestions, we incorporated additional theoretical guarantees on long-range propagation that also accounts for the presence of non-conservative forces.\\n- In Appendix A.3, we clarified that the assumption on the structure of W and V matrices is not restricting the final implementation of $p$ and $q$ components. \\n- We improved the discussion on the choice of the hyperparameters.\\n- We improved the clarity of Tables 2 and 5 as well as provided a deeper clarification on the usefulness of the proposed model with respect to existing techniques in Section 3.4. \\n- We clarified the goal of the graph transfer task.\\n- We revised Table 6 to clarify that driving forces do not allow for purely Hamiltonian conservation.\\n\\n----\\n\\nAs the author-reviewer discussion comes close to an end, we would like to thank the reviewers again for their invaluable feedback and the positive assessment of our paper. We did our best efforts to provide a detailed response to reviewers\\u2019 comments, and we hope these revisions address all concerns while further emphasizing the significance and robustness of our contributions.\\n\\nIn particular, we would greatly appreciate hearing from Reviewer **XxSd** whether they were satisfied with our responses. We hope that this is the case, and, if so, we would like to kindly ask the reviewer to consider revising their score.\\n\\nThank you all again for your constructive feedback.\"}", "{\"comment\": \"We thank the Reviewer for the quick response to our message. We noticed that the response may contain several typos, which made some parts challenging to interpret. Below, we provide our reply based on our best understanding of the concerns raised.\\n\\nAs for the under-estimation of the general **PH-DGN** (i.e., the one with driving forces) in the ablation study in Appendix D.4, we want to emphasize again, that *the goal of this study was not to optimize performance but rather to investigate how different driving forces contribute to solve the task under a constant starting point*, i.e., the hyperparameters selected for PH-DGN$_C$. We do acknowledge that there may be a possible configuration of shared hyperparameters that could lead to better performances on the validation set for the general PH-DGN. However, while we agree that this is crucial for optimization purposes, it is outside the scope of our ablation, which is to show the effect of singular forces to the same purely conservative regime in the Minesweeper task. \\u200b\\u200bMoreover, it seems from the comment that the Reviewer may believe we are not retraining the general PH-DGN after selecting the hyperparameters for PH-DGN$_C$\\u200b. However, as we have previously explained, this is not the case. 
In fact, we retrain the model to optimize its learnable parameters using the selected shared hyperparameters to ensure a fair evaluation.\\n\\nFinally, since this is an ablation study and optimizing performance is not critical for its purpose, we believe the Reviewer\\u2019s concerns, while valid, may not significantly impact the evaluation of our contributions. In light of this, we kindly encourage the Reviewer to consider this context when assessing their score.\"}", "{\"title\": \"Rebuttal Part 4\", \"comment\": \"**- Regarding the comparison clarification**\\n\\nAgain, we thank the Reviewer for its effort and we refer them to the revised manuscript, which now contains a deeper clarification on the usefulness of the proposed model with respect to existing methods. \\n\\n**- Regarding Adam citation**\\n\\n We included the citations to our employed optimization strategies in the revised manuscript\\n\\n**- Regarding n. layer typo**\\n\\nWe corrected the typo in the revised manuscript. Thank you.\\n\\n**- Regarding Table 6**\\n\\nWe thank the Reviewer for the comment. Appendix Table 6 serves as a high-level comparison with related works on Hamiltonian inspired DGNs. We opted to report our framework as a single row in the table for simplicity reasons, since in our PH-DGN the driving forces can be turned on and off depending on the specific needs of the problem. Indeed, from a high-level perspective, the conservative approach could be seen as a subset of the full port-Hamiltonian approach, explaining why in the single row scenario we marked the Hamiltonian conservation column. Following the Reviewer\\u2019s suggestion, we revised Table 6 to clarify that driving forces do not allow for purely Hamiltonian conservation.\"}", "{\"comment\": \"Thank you for the explanation. Although I am not perfectly confident, I think this protocol may have the risk of overfitting slightly as re-trained learnable parameters implicitly depend on the validation dataset through the choice of hyperparameters of PH-DGN$_{C}$. However, the protocol itself looks OK because the test dataset is not used for choosing learnable parameters and hyperparaemters.\"}", "{\"comment\": \"I thank the authors for the further responses to my questions.\\n\\n> Regarding the potential risk of overfitting in this protocol, we kindly ask the Reviewer to elaborate further on their argument.\\n\\nFirst, I realize that the Dampening and the External forcing do not have hyperparameters, which I overlooked in the last comment. I agree with the authors in that we do not have to choose hyperparameters of these components using the validation dataset.\\n\\nStill, I think that there is a possibility that the performance of PH-DGN$_C$ could be underestimated by the authors' protocol. In order to search the best hyperparameters from the set of possible hyperparameters (which we denote by $\\\\Theta$), we need to search *all* possible spaces of learnable parameters for each hyperparameter $\\\\theta \\\\in \\\\Theta$ (using the training dataset), then choose the best hyperparameter (using the validation dataset). 
However, in the authors' protocol, since we only search the part of learnable parameters to choose the hyperparameter, we could fail to find the best model.\\n\\n-------------------\\n\\nFor example, for simplicity, suppose we only have two learnable parameters --- $w$ for PH-DGN$_C$ and $w'$ for the Dampening and the External forcing, and one hyperparameter $\\\\theta$ (shared by PH-DGN$_C$ and PH-DGN), which takes only two values $\\\\theta=0, 1$. \\n\\nFor PH-DGN$_C$, we assume:\\n- When we fix $\\\\theta=0$, the model achieves the best performance $p_0$ at $w=a_0$, \\n- When we fix $\\\\theta=1$, the model achieves the best performance $p_1$ at $w=a_1$, \\n\\nwhere $p_0 > p_1$.\\n\\nFor PH-DGN, we assume:\\n- When we fix $\\\\theta=0$, the model achieves the best performance $q_0$ at $(w, w') = (a_0, b_0)$,\\n- When we fix $\\\\theta=1$, the model achieves the best performance $q_1$ at $(w, w') = (a_1, b_1)$,\\n\\nwhere $q_0 < q_1$\\n\\nThen, the best-performing PH-DGN is $(w, w') = (a_1, b_1)$ and $\\\\theta=1$, which achieves $q_1$. However, if we follow the authors' protocol, we first choose $\\\\theta=0$ because we have $p_0 > p_1$, then choose the learnable parameter $(w, w') = (a_0, b_0)$ to get the sub-optimal PH-DGN, which achieves $q_0$.\"}", "{\"metareview\": \"**(a) Scientific Claims and Findings:**\\nThe paper introduces a novel framework called port-Hamiltonian Deep Graph Networks (pH-DGNs). This framework models neural information flow in graphs by leveraging principles from Hamiltonian dynamical systems, aiming to address challenges in long-range information propagation within graph representation learning. By incorporating both non-dissipative and non-conservative behaviors, the approach seeks to balance information conservation and dissipation, providing theoretical guarantees on information preservation over time. Empirical results demonstrate that pH-DGNs enhance the performance of simple graph convolutional architectures, achieving state-of-the-art results on benchmarks requiring long-range propagation.\\n\\n**(b) Strengths:**\\n* Innovative Framework: The introduction of port-Hamiltonian systems into graph neural networks offers a fresh perspective on managing information flow, potentially addressing limitations in existing architectures concerning long-range dependencies.\\n* Theoretical Foundations: The framework is grounded in well-established principles from Hamiltonian dynamics, providing a solid theoretical basis for the proposed approach.\\n* Empirical Performance: The proposed method demonstrates superior performance on benchmarks that require long-range information propagation, indicating its practical effectiveness.\\n* Applicability: The approach can be integrated into general message-passing architectures, suggesting broad applicability across various graph-based learning tasks. \\n\\n**(c) Weaknesses:**\\n* Complexity: The incorporation of port-Hamiltonian systems may introduce additional complexity into the model, potentially impacting computational efficiency and implementation.\\n* Scope of Evaluation: While the empirical results are promising, the evaluation is primarily focused on benchmarks requiring long-range propagation. Assessing the framework's performance across a wider range of tasks and datasets would provide a more comprehensive understanding of its capabilities. 
The empirical improvements are limited in some cases.\\n* Practical Implementation Details: The paper could benefit from a more detailed discussion on the practical aspects of implementing the proposed framework, including computational requirements and potential challenges in real-world applications.\\n\\n**(d) Reasons for Acceptance:**\\nAfter a thorough evaluation of the paper, I recommend acceptance based on the following considerations:\\n1. Novel Contribution: The paper presents a unique integration of port-Hamiltonian systems into graph neural networks, offering a new approach to addressing challenges in long-range information propagation.\\n2. Theoretical Rigor: The proposed framework is underpinned by solid theoretical foundations from Hamiltonian dynamics, enhancing the credibility and potential impact of the work.\\n3. Empirical Validation: The method demonstrates state-of-the-art performance on relevant benchmarks, providing empirical evidence of its effectiveness. Comparisons for the Minesweeper task from Platonov et al. (2023) were added during the rebuttal.\\n4. Broader Impact: The approach's applicability to general message-passing architectures suggests it could influence a wide range of graph-based learning tasks, contributing to advancements in the field.\\nWhile there are areas for improvement, such as expanding the scope of evaluation and providing more practical implementation details, the paper's strengths and contributions to the field warrant its acceptance.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers X6rz and 7PXw clearly suggest the acceptance of the work and find the contribution interesting and of merit to the ICLR community.\\nThey were satisfied with the additional theorems and experiments that they requested during the rebuttal.\\nFurthermore, the authors engaged in a detailed discussion with Reviewer XxSd, whose concerns were largely addressed, even though their score suggests reservations. Yet, arguments for those reservations were not provided at the late stage of the discussion, and I agree with Reviewers X6rz and 7PXw that the approach has sufficient merit for acceptance.\"}", "{\"summary\": \"This work provides a novel methodology for the incorporation of port-Hamiltonian dynamics in graph representation learning. The central model, called a port-Hamiltonian Deep Graph Network (PH-DGN), is introduced first in a purely conservative setting using only the Hamiltonian and no non-conservative terms. Several theorems are developed to show that this conservative case leads to long-range data propagation between graph nodes, where the graph dynamics exhibit no energy loss and gradients do not vanish as the backward sensitivity matrix is bounded below. Dissipative forces are then added to the full port-Hamiltonian model, which are two additional networks that may be trained to capture non-conservative forces. 
Several experiments follow, including a showcase of energy conservation and sensitivity to empirically verify theoretical work, and a graph transfer problem and graph property prediction problem to compare performance on benchmark tasks against other graph models in settings which require long-range information propagation.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"There is very clear motivation to this work, and it builds nicely upon other references.\", \"The proposed port-Hamiltonian approach is an original and clever way to allow for non-conservative dynamics in a graph network while still maintaining long-range message passing.\", \"The theoretical results for the conservative case are strong, and the motivation and interpretation for these are presented nicely.\", \"The experimental setup is superb; the care taken to ensure ease of replication is applauded. Model details are presented very clearly and choices are explained well for each step of the setup.\", \"A strong suite of models are compared against, with many different competitors and different approaches. The consistently favorable results provide a great deal of strength to claims of the proposed method's performance.\", \"The appendices are comprehensive for both proofs and experimental setup, and made clear many of the questions I had on an initial read.\"], \"weaknesses\": [\"The majority of the theoretical results are developed for the conservative case. This makes sense in the context, as conservative long-range message passing is stated as a goal, but I would also be quite interested to see what could be proven for the fully general port-Hamiltonian case.\", \"In the explanation of Theorem 2.3, the statement that \\\"the final representation of each node retains its complete past\\\" seems somewhat strong. While I understand that the BSM result shows the influence of the entire past history on the current state, this statement as written seems to imply something stronger, and perhaps could be made more clear.\", \"The dissipitive force terms are added in some experiments to great success, but the explanations of their performance are more intuitive and are not supported by hard data in the paper. There may be a great opportunity here for visualization to support the intuitive claims.\"], \"there_are_two_very_minor_typos\": [\"In the first paragraph of Section 2, \\\"node states' in graph\\\" has an unnecessary apostrophe.\", \"In Appendix D.3, \\\"on a grid of n. layers\\\" has an unnecessary period.\"], \"questions\": [\"How would classical GCN aggregation interact with Theorem 2.4? Can the bound be easily extended for that case?\", \"In Section 3.1, you mention that the growing behavior can be controlled by regularizing the weight matrices or using normalized aggregation functions. Did you try this? How are the empirical results?\", \"Have you examined the interpretability of a trained PH-DGN? In particular, do the learned non-conservative forces make sense for the associated problem?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Thank you\", \"comment\": \"We sincerely thank the Reviewer for their thoughtful feedback and their positive evaluation of our work, as well as for the increased score.\"}", "{\"title\": \"Rebuttal Part 2\", \"comment\": \"**Regarding controlling the upper bound Appendix Th. A.1**\\n\\nWe thank the Reviewer for the question. 
In our experiments, we tested two aggregations schemes, i.e., the one implemented in Eq. 6 and the classical GCN aggregation. Although Theorem A.1 theoretically indicates a potential increase in the sensitivity measure, we did not observe this issue with either aggregation method during our experiments. Furthermore, we found that incorporating the GCN aggregation scheme did not consistently lead to improved performance. Hence, we did not see the necessity to include norm-constraining regularizers on the weights during training in our experiments. We recommend practitioners to treat the aggregation scheme as a hyperparameter to be selected via model selection.\\n\\n**Regarding the interpretability of a trained PH-DGN**\\n\\nWe thank the Reviewer for the comment. Our experimental suite was designed to deliberately demonstrate the long-range capabilities of our approach and, as such, we did not focus on examining the interpretability of the trained model. Interpreting the dynamics of the model is inherently challenging due to the lack of ground truth for what constitutes the \\\"true\\\" flow of information across the graph, especially on tasks like predicting 3D properties of peptides. Without this reference, it becomes difficult to disentangle how the model processes and propagates information or to verify whether it aligns with any hypothesized dynamics. This challenge is further amplified by the fact that, in our experiments, the driving forces are modeled using complex neural networks, making it difficult to deduce clear intuitions about how these forces operate within the model. These limitations highlight the need for future work to explore the interpretability of the methods.\"}", "{\"comment\": \"We thank the Reviewer for positively evaluating our experimental protocol. Regarding the potential risk of overfitting in this protocol, we kindly ask the Reviewer to elaborate further on their argument.\\nTo further clarify on our approach, as stated in previous responses, after selecting the shared hyperparameters, we are not performing an additional model selection for the driving forces. Thus, the validation set is not used to perform additional tuning, meaning that the validation set is not used multiple times for the same purpose. Our goal in this ablation is not to optimize performance but rather to investigate how different driving forces contribute to solve the task under a constant starting point, i.e., the hyperparameters selected for PH-DGN$_C$. We hope this clarification addresses your concern and further highlights the rationale behind our experimental design.\"}", "{\"comment\": \"I thank the authors for the quick responses.\\n\\n**3) Regarding the usefulness of PH-DGN on the real dataset**\\n\\nThank you for the explanation. Let me take time to consider whether the rationale is reasonable.\\n\\n----------------\\n\\n**Regarding choosing the hyperparameters of PH-DGN$_C$ in Minesweeper task**\\n\\n> Dampening and external forces do not play any role in the selection of the PH-DGN hyperparameters, since such components are not employed in the purely conservative setting.\\n\\nThank you for the explanation. I understand this point. My question was about the evaluation protocol of PH-DGN (i.e., the model with dampening and external force, which have learnable parameters.) 
I thought that to evaluate PH-DGN, the authors (1) first choose the hyperparameters that PH-DGN have in common with PH-DGN$_C$ using training and validation datasets, then (2) learn parameters that are specific to PH-DGN. However, since training and validation datasets are already used in the first stage, we only have the test dataset to conduct the second stage (2), which I think has the risk of information leakage.\\nLet me know if I misunderstand something.\\n\\nOther questions are OK for me. I am sorry for the short answers as the deadline is approaching.\"}", "{\"title\": \"Rebuttal Part 1\", \"comment\": \"We thank the Reviewer for the extensive feedback on our manuscript and for acknowledging that we ***carefully explained*** the knowledge behind our method and that we ***theoretically show*** and empirically validate the ***usefulness*** in the long-range regime. Below, we address each of your comments, for which we are grateful. We found them to be helpful to further improve the quality of our paper, and we hope that you are satisfied with our response. We hope that in light of our clarifications and modifications to the paper, you will consider revising your score.\\nIn particular, following the Reviewer\\u2019s suggestions, we highlight how the revised version of the paper now contains new additional theoretical guarantees on long-range propagation that also accounts for the presence of non-conservative forces, as well as new experiments to prove the usefulness of the fully conservative PH-DGN (and, of course, additional clarifications to all Reviewer\\u2019s questions).\\n\\n**1) Regarding the theoretical guarantees for the general PH-DGN**\\n\\nWe appreciate the Reviewer\\u2019s effort in improving the quality of our work. We note that the main goal of our paper is to reconcile under a single framework strong theoretical guarantees of conservation, for non-dissipative long-range propagation, with non-conservative behaviors, to potentially improve the performance on the downstream task. Indeed, without driving forces, the final ability of the system to model all complex nonlinear dynamics is restricted in real-world scenarios, as empirically shown in Table 2. To further accommodate the Reviewer\\u2019s suggestion, we derived some additional theoretical results on the effects of the port-Hamiltonian components on information propagation in Appendix B.7. In particular, we note that the sensitivity matrix can be linearly decomposed into the conservative and dissipative terms. Assuming that the driving forces and their derivatives are bounded, the self-influence of a node after one update can be constrained within fixed upper and lower bounds which are mediated (among the other) by the step-size $\\\\epsilon$. Additionally, we demonstrate that a similar upper bound applies to the influence between neighboring nodes. These results indicate that, under mild assumptions, the port-Hamiltonian components theoretically support long-range propagation.\\n\\n\\n**2) Regarding Tables 2 and 5**\\n\\nWe thank the Reviewer for the feedback. Both Tables 2 and 5 contain results for the LRGB benchmark. Due to submission length constraints, we decided to report in the main text (i.e., Table 2) only a selection of all the considered baselines while in the appendix (i.e., Table 5) we report the full list of baselines. 
In our revised paper, we have improved the clarity of both tables by better specifying which result comes from which paper, and whether or not the model uses positional/structural encodings. We are open to other suggestions to improve the clarity of the tables.\"}" ] }
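The graph-transfer setup described in the rebuttal thread above (the source node's target switched to all zeros, the target node's to all ones, intermediate nodes keeping their random input features, and an MSE loss over every node) can be illustrated with a short sketch. This is an editor's reconstruction under stated assumptions, not the authors' code: the tensor shapes, the `source_idx`/`target_idx` indices, and the placeholder `nn.Linear` standing in for the graph network are all illustrative.

```python
import torch
import torch.nn as nn

def build_graph_transfer_pair(num_nodes: int, feat_dim: int, source_idx: int, target_idx: int):
    """Input/target pair for the graph-transfer task sketched in the rebuttal.

    Input: the source node carries the "1" label, the target node the "0" label,
    and intermediate nodes receive random features. Target: the source/target
    labels are switched (source -> 0s, target -> 1s) while intermediate nodes
    keep their original input values.
    """
    x = torch.rand(num_nodes, feat_dim)
    x[source_idx] = 1.0
    x[target_idx] = 0.0

    y = x.clone()
    y[source_idx] = 0.0  # switched label for the source node
    y[target_idx] = 1.0  # switched label for the target node
    return x, y

# Minimal usage: any node-wise model mapping [N, d] -> [N, d] could replace the Linear layer.
x, y = build_graph_transfer_pair(num_nodes=10, feat_dim=4, source_idx=0, target_idx=9)
model = nn.Linear(4, 4)                      # placeholder for the deep graph network
loss = nn.functional.mse_loss(model(x), y)   # MSE over all nodes, as described above
loss.backward()
```

The placeholder model is only there to make the sketch runnable; the point is the target construction and the graph-wide MSE objective.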
02kZwCo0C3
SAIL: Self-improving Efficient Online Alignment of Large Language Models
[ "Mucong Ding", "Souradip Chakraborty", "Vibhu Agrawal", "Zora Che", "Chenghao Deng", "Alec Koppel", "Mengdi Wang", "Dinesh Manocha", "Amrit Singh Bedi", "Furong Huang" ]
Reinforcement Learning from Human Feedback (RLHF) is a critical method for aligning large language models (LLMs) with human preferences. However, existing offline alignment approaches, such as DPO, IPO, and SLiC, rely heavily on static datasets of human preferences, often leading to suboptimal performance. Recent efforts in the literature have moved towards online RLHF methods, but they lack a unified framework and suffer from distribution shift issues. In this work, we formalize online LLM alignment as a bilevel optimization problem. By reducing this formulation to a more computationally efficient single-level first-order method, utilizing reward-policy equivalence, we propose SAIL (Self-improving Efficient Online Alignment). SAIL generates new samples and iteratively refines model alignment through online exploration and regulation of preference labels. This enables continuous, self-improving alignment and generalizes prior online RLHF methods as special cases. Compared to state-of-the-art RLHF methods, SAIL delivers significant performance gains, with up to 11.6\\% improvement in win rate and a 3.6-point increase in evaluation rewards, while maintaining low computational overhead.
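For readers unfamiliar with the reduction the abstract refers to, the sketch below shows the standard DPO-style single-level objective that the reward-policy equivalence builds on. It is a hedged, generic illustration only: SAIL's actual objective additionally involves online response generation, preference-label regulation, and extra gradient terms that are not reproduced here, and all variable names and numbers below are toy placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_style_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref, beta=0.1):
    """Generic DPO-style single-level objective (not SAIL's exact loss).

    Relies on the reward-policy equivalence r(x, y) = beta * log(pi(y|x) / pi_ref(y|x)),
    up to a prompt-dependent constant that cancels inside the Bradley-Terry
    preference likelihood for each (chosen, rejected) response pair. Inputs are
    summed log-probabilities of whole responses under the policy and a frozen
    reference model.
    """
    chosen_margin = logp_w_policy - logp_w_ref      # implicit reward of the preferred response
    rejected_margin = logp_l_policy - logp_l_ref    # implicit reward of the dispreferred response
    logits = beta * (chosen_margin - rejected_margin)
    return -F.logsigmoid(logits).mean()

# Toy usage with made-up log-probabilities for a batch of two preference pairs.
logp_w_policy = torch.tensor([-12.3, -20.1], requires_grad=True)
logp_l_policy = torch.tensor([-15.7, -19.8], requires_grad=True)
logp_w_ref = torch.tensor([-13.0, -21.0])
logp_l_ref = torch.tensor([-14.9, -20.2])
loss = dpo_style_loss(logp_w_policy, logp_l_policy, logp_w_ref, logp_l_ref)
loss.backward()
```

Minimizing this loss over the policy's log-probabilities is the single-level surrogate for the KL-regularized bilevel formulation mentioned in the abstract; the online and self-improving components of SAIL sit on top of this base objective.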
[ "RLHF", "Alignment", "Online Alignment", "Self-Play" ]
Reject
https://openreview.net/pdf?id=02kZwCo0C3
https://openreview.net/forum?id=02kZwCo0C3
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uFzhESKso6", "sLPl1rDy74", "r0xTzOrbHO", "ltMqz7hL5U", "d6EdV2MJYf", "ZwEcy0FU5x", "Yqbllggrmw", "YBF0htDOcP", "QM5flvOQTS", "OQRvgef8Aj", "LptcsYSp94", "ID6MmtL62c", "FcLVLsBQIy", "F9TjZCBBKB", "E3XYUMBr1C", "DHwZxFryth", "BU6la6v4Ci", "571UoI4F4Q", "25dqYHH6wI", "1QaKMNvqWa", "0WYdN2f4Gf", "01R8mdOaXU" ], "note_type": [ "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_review", "decision", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1732371694970, 1732395397144, 1732752325077, 1732618365347, 1730490911716, 1732371368955, 1729772623071, 1732372662980, 1733083173316, 1732371761120, 1732371796553, 1732634885594, 1732371453551, 1733162266100, 1734922430195, 1732634846687, 1730696090534, 1737524191809, 1730646081261, 1732371584480, 1732556952554, 1731715465328 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Reviewer_urgR" ], [ "ICLR.cc/2025/Conference/Submission12435/Reviewer_ZoUS" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Reviewer_7i95" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Reviewer_ZoUS" ], [ "ICLR.cc/2025/Conference/Submission12435/Area_Chair_uzDx" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Reviewer_Rdtx" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission12435/Reviewer_urgR" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ], [ "ICLR.cc/2025/Conference/Submission12435/Reviewer_Rdtx" ], [ "ICLR.cc/2025/Conference/Submission12435/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to Reviewer ZoUS (1/3)\", \"comment\": \"> Limited Exploration of Alternative Utility Functions: The method relies on the Bradley-Terry preference model, which may not be optimal for all RLHF applications. Future work could benefit from exploring alternative utility models that account for more nuanced preference data.\\nSAIL currently relies on the Bradley-Terry preference model. Have you considered experimenting with other preference models, and do you anticipate any impact on alignment performance if different utility functions are used?\\n\\n**Response:** Thank you for this insightful comment. Indeed, our current method relies on the Bradley-Terry (BT) preference model, and exploring alternative preference models is an exciting direction for future work. 
Since this is one of the initial works establishing a rigorous foundation for iterative RLHF, we focused on fundamental methods to clearly convey the core idea of Bilevel RLHF.\", \"our_work_reveals_a_crucial_insight\": \"the BT preference model plays a critical role in ensuring strong concavity of the lower-level problem within our bilevel optimization framework. This mathematical property enables us to derive a closed-form solution, which is key to simplifying the bilevel problem into single-level optimization using the DPO trick. However, this approach may not readily extend to more complex or non-convex preference models, as they could introduce additional optimization challenges.\\n\\nWe agree that extending the framework to accommodate alternative utility functions, particularly those capable of capturing more nuanced or domain-specific preferences, is a valuable research direction. Exploring these extensions could uncover interesting trade-offs between expressiveness, computational feasibility, and alignment performance, and we plan to address this in future work.\\n\\n> Scalability Concerns for Larger Models: Although the paper demonstrates SAIL\\u2019s effectiveness on LLMs with up to 8B parameters, additional scaling experiments would strengthen the paper's claims about computational efficiency for significantly larger models.\\nThe paper demonstrates SAIL's efficiency with models up to 8B parameters. Could you share any considerations or expected challenges for scaling SAIL to significantly larger models, such as those with over 100B parameters?\\n\\n**Response:** Thank you for this insightful question regarding the scalability of SAIL to larger models exceeding 100B parameters. We would like to share our considerations and expected challenges:\\n\\n1. **Primary Overhead Sources:** For the main SAIL methods\\u2014**SAIL-PP** and **SAIL-PR**\\u2014the major overhead compared to standard DPO comes from response generation and reward evaluation. The additional gradient terms computed (as per Equations (9) and (13)) are low-dimensional relative to the model parameters or inputs. This results in minimal time and memory overhead, even for models with over 100B parameters.\\n2. **Challenges Similar to Online RLHF Training:** Scaling SAIL to larger models involves challenges common to most online RLHF training methods. To achieve computational efficiency and enable training on machines with limited resources, we recommend using **Parameter-Efficient Fine-Tuning (PEFT)** techniques not only for training but also during generation, as we have implemented in our code.\\n3. **Technical Considerations:** There may be additional overhead when switching between training and generation modes, as well as interfacing with the reward model. Utilizing an optimized training framework that minimizes these overheads is crucial. Our current implementation adapts TRL's `DPOTrainer`, but it is not fully optimized or tested for models larger than 100B parameters. Further optimization is needed to handle the increased scale effectively.\\n\\nWe believe that with these considerations and optimizations, SAIL can be effectively scaled to significantly larger models while maintaining computational efficiency.\"}", "{\"title\": \"Response Status Update to Reviewer 7i95\", \"comment\": \"Thank you for your detailed comments. We are currently running the requested experiments and will post our complete responses with results soon. 
We appreciate your patience.\"}", "{\"title\": \"Looking Forward to Your Review of Our Responses\", \"comment\": \"Thank you so much for your insightful and constructive feedback on our work. We have provided detailed responses to your valuable comments, including new experimental results on AlpacaEval 2.0 length-controlled win-rates with additional model architectures, as well as the requested Arena-Hard benchmark and ARC-Challenge evaluations.\\n\\nAs we are nearing the end of the author-reviewer discussion period, we would be very grateful if you could take a moment to review our responses. We truly value your expertise and would welcome any additional thoughts or questions you may have. We are here to address any remaining concerns and continue this productive discussion.\\n\\nThank you again for your time and dedication in helping us improve our work.\"}", "{\"comment\": \"Thank you for addressing my questions. I have no further inquiries and will maintain my current rating.\"}", "{\"summary\": \"The paper introduces SAIL (Self-improving Efficient Online Alignment), an approach for online reinforcement learning from human feedback (RLHF) that aims to align large language models (LLMs) with human preferences. SAIL addresses limitations in offline RLHF methods by framing online LLM alignment as a bilevel optimization problem, which it reduces to a single-level first-order optimization method to enhance computational efficiency. The approach allows for continuous model improvement by generating samples iteratively, regulating preferences, and exploring online feedback. SAIL's self-improvement mechanism enables it to reduce reliance on preference oracles, thus allowing for more scalable alignment. Empirical evaluations demonstrate significant performance improvements over standard RLHF baselines.\", \"soundness\": \"4\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. **Innovative Formulation**: The paper provides a novel formulation of online RLHF through bilevel optimization, enhancing computational efficiency by reducing this problem to a single-level optimization, which is a significant advancement for practical LLM training.\\n2. **Effective Self-improvement Mechanism**: SAIL effectively addresses challenges related to reliance on preference oracles, making online alignment more feasible by leveraging the model's self-generated responses for iterative improvement.\\n3. **Comprehensive Evaluation**: The paper includes extensive experiments that demonstrate substantial improvements in evaluation reward, win rate, and efficiency over other methods like DPO, supporting SAIL's efficacy and computational advantage.\\n4. **Scalability and Adaptability**: SAIL\\u2019s approach to handling distribution shifts and reducing oracle reliance presents a promising method for more scalable RLHF applications, especially for emerging large-scale LLMs.\\n5. **Detailed Experiment Design and Baselines**: The experiment section is well-structured, covering a range of metrics (reward-margin, eval-reward, win rate) and configurations (SAIL-PR, SAIL-PP, SAIL-DP), providing insights into the trade-offs and performance across different setups.\", \"weaknesses\": \"1. **Limited Exploration of Alternative Utility Functions**: The method relies on the Bradley-Terry preference model, which may not be optimal for all RLHF applications. Future work could benefit from exploring alternative utility models that account for more nuanced preference data.\\n2. 
**Scalability Concerns for Larger Models**: Although the paper demonstrates SAIL\\u2019s effectiveness on LLMs with up to 8B parameters, additional scaling experiments would strengthen the paper's claims about computational efficiency for significantly larger models.\\n3. **Dependency on Initial Offline Dataset**: While SAIL reduces oracle dependency, it still relies on an initial offline dataset to bootstrap alignment. Further discussion on managing this dependency, especially when starting with limited labeled data, could be beneficial.\\n4. **Potential Overfitting in SAIL-DP**: The paper mentions that SAIL-DP shows signs of overfitting on in-distribution responses, suggesting that the method may benefit from more refined regularization techniques to ensure robust generalization to out-of-distribution samples.\", \"questions\": \"1. The paper demonstrates SAIL's efficiency with models up to 8B parameters. Could you share any considerations or expected challenges for scaling SAIL to significantly larger models, such as those with over 100B parameters?\\n\\n2. SAIL currently relies on the Bradley-Terry preference model. Have you considered experimenting with other preference models, and do you anticipate any impact on alignment performance if different utility functions are used?\\n\\n3. SAIL-DP seems to show some overfitting on in-distribution responses. Could you discuss any regularization techniques you considered or plans to mitigate this, particularly to enhance generalization to out-of-distribution data?\\n\\n4. Given the dependence on an initial offline dataset, how does SAIL perform in situations with minimal or noisy initial data? Are there strategies within the current framework to mitigate issues arising from a limited initial dataset?\\n\\n5. Could you provide more detail on the computational costs of SAIL, particularly in comparison with other RLHF approaches? How does the single-level optimization approach compare in terms of resource requirements, and what practical considerations should be kept in mind when implementing it?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer Rdtx (1/2)\", \"comment\": \"> As a practitioner, at least the presentation/writing wasn't clear enough to agree that SAIL provides a unified framework for those who might want to consider using online RLHF in future works. I would personally suggest adding a section explains about how one could use SAIL instead of iterative DPO methods, as well as a huge emphasis on how the provided code could be used.\\n\\n**Response:** Thank you for this valuable suggestion. We will enhance the manuscript by adding a paragraph that addresses the limitations of current online iterative RLHF methods. 
In the final draft, we will expand upon the following points to better articulate the significance of SAIL:\\n\\n - We will emphasize that iterative methods fail to account for interdependencies during the reward learning phase, specifically the dependency of policy-generated trajectories that result in distribution shift.\\n - To address these dependencies in a principled manner, we demonstrate the necessity of reformulating the alignment problem as a bilevel optimization problem, as expressed in equation (3).\\n - While bilevel optimization presents significant computational challenges due to its requirement for complex second-order information, making it computationally intensive.\\n - To overcome this, we leverage RLHF's special structure and the closed-form solution of the KL-regularized problem to transform it into a single-level problem without compromising generality, leading to our proposed SAIL approach.\\n - Finally, we develop a self-improvement mechanism that replaces the human-in-the-loop component by utilizing the implicit reward function as defined in equation (11).\\n\\n> There is a huge emphasis on trying to improve reward models (on RewardBench) to mitigated reward model overoptimization & train better LMs. I am curious if given a fixed budget/time limit, whether one should try to employ online RLHF methods or try to enhance reward models in general.\\n\\n**Response:** Thank you for raising this insightful point. Indeed, there has been significant emphasis on improving reward models (through initiatives like RewardBench and new VLM benchmarks, particularly from AllenAI etc.) which has successfully addressed certain issues such as length bias. While we acknowledge the value of this approach in addressing specific challenges, we believe the underlying issue is more fundamental and encompasses response quality more broadly. The effectiveness of reward models is intrinsically dependent on training with optimal or high-quality response pairs. However, this presents a significant challenge, as it necessitates training on an extensive corpus of responses to ensure comprehensive coverage.\\n\\nOur proposed bilevel optimization framework addresses this challenge by providing an efficient mechanism for concurrent training of the reward model and policy. This approach enables dynamic collection of task-relevant response pairs, resulting in more targeted and effective training.\\n\\n> I would suggest adding an explanation of what is the limitation of online RLHF methods that the paper could not address. For example, it is still unclear on what is the best practice to \\\"whether to discard instances from a preference dataset that have a subtle difference on the preference strength\\\" or \\\"would it be beneficial to employ more models when gathering responses when consisting a preference dataset\\\".\\n\\n**Response:** Thank you for this valuable suggestion regarding the limitations of online RLHF methods. We will include a comprehensive discussion of these limitations in the revised manuscript. Our theoretical insights and experimental analysis reveal an important finding: preference datasets containing diverse responses yield more informative gradients, which are essential for effective model updates. Conversely, responses with only subtle differences in preference strength generate minimal gradients, resulting in negligible improvements.\\n\\nOur work leaves several promising directions unexplored. 
One particularly intriguing possibility is the development of a curriculum-based approach that initially leverages diverse responses and progressively incorporates responses with closer preference values. Such an approach could optimize the learning process by capitalizing on response diversity in early stages while refining alignment as the model converges. This aligns with the natural progression we observe in model training, where response similarity tends to increase as the model approaches convergence, particularly in scenarios with low uncertainty in optimal response generation. This area represents a promising avenue for future research.\"}", "{\"summary\": \"The paper addresses the limitations of traditional reinforcement learning from human feedback (RLHF) methods for aligning large language models (LLMs) with human preferences. The authors propose a unified framework for online RLHF formulated as a bilevel optimization problem, which they simplify to a single-level method for efficiency. This approach, called SAIL, allows for continuous model improvement through online exploration and iterative refinement of preference labels, mitigating issues related to distribution shifts and reducing reliance on static preference oracles. Experimental results demonstrate significant performance gains, with SAIL outperforming state-of-the-art RLHF methods.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"(1) The paper introduces a novel unified framework for online RLHF that effectively addresses the challenges of static datasets and distribution shifts.\\n(2) By reducing a bilevel optimization problem to a single-level method, SAIL maintains theoretical benefits while significantly lowering computational costs, making it more practical for real-world applications.\\n(3) The self-improving aspect of SAIL allows models to iteratively enhance alignment without extensive supervision, addressing the challenge of needing constant access to human preference data.\\n(4) Extensive experiments validate the effectiveness of SAIL, showing substantial improvements in performance metrics compared to existing methods, thus showcasing its applicability across various datasets.\\n\\nI would consider rescoring if the authors can solve my concern.\", \"weaknesses\": \"(1) The method does not improve much in the AlpacaEval 2.0 Score. The author should give a detailed explanation. And why not use metrics like length-controlled win rate?\\n(2) Authors should compare more advanced preference optimization algorithms like ORPO and SimPO. And current results are not impressive for the alignment community.\\n(3) Why did the author just include MMLU as the downstream task metric? They should incorporate more tasks (eg., arc-challenge) like the similar self-improvement work SPIN (ICML24) to better illustrate their contribution.\\n(4) In the alignment area, it's better to conduct experiments in the Arena-Hard benchmark since it's a common metric to evaluate the alignment ability.\", \"questions\": \"See the weakness section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Status Update on Response Progress\", \"comment\": \"We have now posted detailed responses to questions that do not heavily depend on experimental validation. 
We are diligently working on the remaining experimental evaluations and will provide comprehensive results, along with any necessary response updates, in the coming days. We sincerely appreciate your thoughtful feedback and understanding as we work to thoroughly address all comments and strengthen our paper.\"}", "{\"title\": \"Time-Critical: Your Review of Our Responses Would Be Greatly Appreciated\", \"comment\": \"We are nearing the end of the discussion period, and we wanted to reach out once more about our detailed responses to your insightful comments. We greatly value your thorough review and have worked diligently to address each of your concerns, including conducting additional experiments on AlpacaEval 2.0, Arena-Hard benchmark, and ARC-Challenge as per your suggestions.\\n\\nYour expertise and perspective have been crucial in strengthening our work, and we would deeply appreciate if you could take a moment to review our detailed responses before the discussion period ends.\"}", "{\"title\": \"Response to Reviewer ZoUS (2/3)\", \"comment\": \"> Dependency on Initial Offline Dataset: While SAIL reduces oracle dependency, it still relies on an initial offline dataset to bootstrap alignment. Further discussion on managing this dependency, especially when starting with limited labeled data, could be beneficial.\\nGiven the dependence on an initial offline dataset, how does SAIL perform in situations with minimal or noisy initial data? Are there strategies within the current framework to mitigate issues arising from a limited initial dataset?\\n\\n**Response:** Thank you for bringing up this important consideration. While SAIL does depend on an initial offline dataset to bootstrap alignment, it requires less initial data compared to standard DPO. This is because SAIL is designed to address the suboptimality issues of offline alignment methods and to be more efficient than exact bilevel formulations.\\n\\nIn situations with minimal or noisy initial data, SAIL is better suited than standard DPO. Its reduced dependency on large amounts of high-quality data makes it more practical when starting with limited labeled data. Although mitigating issues from limited initial datasets isn't the primary motivation of our framework, this advantage allows SAIL to perform effectively even when the available data is minimal.\\n\\n> Potential Overfitting in SAIL-DP: The paper mentions that SAIL-DP shows signs of overfitting on in-distribution responses, suggesting that the method may benefit from more refined regularization techniques to ensure robust generalization to out-of-distribution samples.\\nSAIL-DP seems to show some overfitting on in-distribution responses. Could you discuss any regularization techniques you considered or plans to mitigate this, particularly to enhance generalization to out-of-distribution data?\\n\\n**Response:** Thank you for this insightful question. SAIL-DP does show signs of overfitting on in-distribution responses, as it significantly improves the Reward Margin but doesn't necessarily enhance metrics like the MT-Bench score. We hypothesize that this is due to the lack of exposure to out-of-distribution responses and offline rewards, which limits the model's ability to generalize.\\n\\nTo mitigate this and enhance generalization to out-of-distribution data, we suggest the following strategies:\\n - **Incorporate Out-of-Distribution Data:** Adding offline rewards and out-of-distribution responses to the training data can help the model learn a more generalized policy. 
This approach is employed in our SAIL-PR and SAIL-PP setups.\\n - **Regularization Techniques:**\\n - Data Augmentation: Augment the offline dataset by rewriting responses using other large language models (LLMs) to introduce more diversity.\\n - Label Smoothing: Apply label smoothing techniques, such as those proposed in cDPO (Mitchell et al., 2023), to reduce overconfidence in the model and mitigate the impact of noisy preference labels.\\n\\nThese strategies can help address the overfitting issue in SAIL-DP and improve its generalization to out-of-distribution samples.\"}", "{\"title\": \"Response to Reviewer ZoUS (3/3)\", \"comment\": \"> Could you provide more detail on the computational costs of SAIL, particularly in comparison with other RLHF approaches? How does the single-level optimization approach compare in terms of resource requirements, and what practical considerations should be kept in mind when implementing it?\\n\\n**Response:** Thank you for your question regarding the computational costs of SAIL compared to other RLHF approaches. Here are our insights:\\n\\n1. **Overhead Comparison with Offline DPO:** SAIL introduces no additional overhead during the model update phase compared to offline DPO. The primary overhead stems from its online nature\\u2014specifically, response generation and reward evaluation.\\n2. **Detailed Overheads of SAIL Variants:** As illustrated in Figure 5 of our paper, the overheads for the three SAIL setups vary:\\n - **SAIL-DP:** This variant incurs minimal overhead, mainly from computing additional gradient terms during backpropagation.\\n - **SAIL-PP:** In addition to the overhead in SAIL-DP, SAIL-PP includes significant overhead from generating online responses.\\n - **SAIL-PR:** Beyond the overheads in SAIL-PP, SAIL-PR also involves overhead from reward evaluation.\\n \\n By comparing the overheads of each setup, one can estimate the contribution of each component to the overall computational cost.\\n \\n3. **Resource Requirements and Practical Considerations:** Similar to other online RLHF methods, implementing SAIL requires careful management of memory resources due to the extra memory needed for online response generation and reward model evaluation. To optimize training speed, it's preferable to load all necessary models and caches into memory simultaneously to avoid the time overhead associated with frequent loading and unloading. Therefore, systems with larger memory capacity are advantageous for running SAIL efficiently.\\n4. **Implementation Guidance:** Our code provides an example implementation based on the TRL package's `DPOTrainer`. While it may not represent state-of-the-art optimization, it serves as a practical starting point. Researchers can build upon this and explore additional optimization strategies to further reduce computational costs when applying SAIL to larger models.\\n\\nWe hope this clarifies the computational considerations and practical aspects of implementing SAIL compared to other RLHF approaches.\"}", "{\"title\": \"Response to Reviewer 7i95 (2/2)\", \"comment\": \"> Why did the author just include MMLU as the downstream task metric? They should incorporate more tasks (e.g., ARC-Challenge) like the similar self-improvement work SPIN (ICML24) to better illustrate their contribution.\\n\\n**Response:** Thank you for your suggestion. Let us first explain why we did not apply many different downstream task evaluation datasets to our experiments. 
One reason is that we have incorporated 5 other metrics including the widely used MT-Bench scores and AlpacaEval 2.0 length-controlled win-rates. Another reason is that the UltraFeedback fine-tuning dataset we used is primarily designed to consider 4 different aspects, namely instruction-following, truthfulness, honesty, and helpfulness, and therefore it may not be very useful to improve the model's capability on reasoning datasets like MMLU and ARC-Challenge.\\n\\nNevertheless, we agree that adding more evaluation datasets would strengthen our experimental analysis. Following the reviewer's suggestion, we added the ARC-Challenge dataset as part of the evaluation and reconducted the experiments on Llama-3 (8B) and ARC-Challenge. We see similar observations as on MMLU, the SAIL methods bring larger improvements than the DPO baseline.\\n\\n| | Instr-Tuned | DPO | SAIL-PR | SAIL-PP | SAIL-DP |\\n|---------------|-------------|-------|---------|---------|---------|\\n| ARC-Challenge Accuracy | 82.2% | 82.8% | 84.1% | 83.6% | 83.4% |\\n\\nThe results show that our improvements are larger than the DPO baseline, although the baseline improvement is small.\\n\\n> In the alignment area, it's better to conduct experiments in the Arena-Hard benchmark since it's a common metric to evaluate the alignment ability.\\n\\n**Response:** Thank you for your suggestion. We agree that the Arena-Hard benchmark is recently becoming a widely used benchmark. We use the Arena-Hard-Auto repository and adapt their newly introduced Style Control (SC) method, which follows an update of Chatbot Arena. Following the reviewer's suggestion, we added the Arena-Hard benchmark as part of the evaluation, and reconducted the experiments on Llama3 (8B). The observation is similar as on MT-Bench, where we clearly see the SAIL methods can lead to significantly larger improvements than the DPO baseline. We plan to add Arena-Hard evaluations to other experiments in the manuscript soon.\\n\\n| | Instr-Tuned | DPO | SAIL-PR | SAIL-PP | SAIL-DP |\\n|-------------------------------------|-------------|------|---------|---------|---------|\\n| Arena-Hard Score (Style Controlled) | 19.8 | 23.8 | 29.4 | 26.8 | 24.9 |\"}", "{\"title\": \"Response to Reviewer Rdtx (2/2)\", \"comment\": \"> Reward margin and offline-reward evaluation is interesting by itself and could provide information of the effectiveness of the method, but I personally think is not as an important measurement as pairwise winrate. Could you elaborate on Section 6.1 why one should consider looking into it?\\n\\n**Response:** Thank you for this thoughtful feedback. While we agree that pairwise win rate represents a critical metric for response quality evaluation, reward margin and offline-reward evaluation contribute significant additional value for the following reasons:\\n - These metrics enable quantitative comparisons between our method and baselines, demonstrating the effectiveness of our RLHF algorithm. Our evaluation utilizes high-quality offline reward models provided by the dataset authors, ensuring consistent evaluation standards.\\n - Although we acknowledge the limitations inherent in using a static reward model, these metrics complement the pairwise win rate and other evaluations such as MT-Bench and MMLU. 
This multi-faceted approach provides a more comprehensive assessment of model performance.\\n\\nWe think this combination of metrics offers a more complete understanding of our method's capabilities and limitations.\"}", "{\"title\": \"Response to rebuttal\", \"comment\": \"Thank you for your response and addressing my concerns. I have no further questions and will keep my current rating.\"}", "{\"metareview\": \"The paper introduces SAIL, a self-improving online RLHF approach for aligning large language models (LLMs). SAIL frames online alignment as a bilevel optimization problem, reducing it to a computationally efficient single-level method. The framework enables continuous improvement by iteratively generating samples, updating preference labels, and leveraging implicit reward functions. SAIL demonstrates performance gains on benchmarks like MT-Bench and RewardBench, reducing reliance on human preference oracles.\\n\\nKey weaknesses include limited comparisons with recent methods (e.g., ORPO, SimPO), insufficient evaluation on diverse tasks, and unclear scalability and computational efficiency. The experimental setup lacks depth, with limited downstream tasks and inconsistent metrics. While the theoretical framing is novel, the practical contributions appear incremental.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers highlighted the novelty of framing online RLHF as bilevel optimization but raised concerns about evaluations, comparisons, and methodology. Authors provided additional benchmarks and clarifications, including Arena-Hard and ARC-Challenge experiments, but concerns about scalability, computational efficiency, and compatibility with other frameworks remained unresolved. Further comprehensive evaluations and theoretical discussions are needed.\"}", "{\"title\": \"Response to Reviewer 7i95 (1/2)\", \"comment\": \"> The method does not improve much in the AlpacaEval 2.0 Score. The author should give a detailed explanation. And why not use metrics like length-controlled win rate?\\n\\n**Response:** Thank you for your careful observation and question. We would like to clarify that we are already using the length-controlled (LC) AlpacaEval 2.0 win-rate metric in our evaluations. We will make this clearer in the table header of Table 3.\\n\\nRegarding the fact that the AlpacaEval 2.0 scores on LLama-3 (8B) do not improve compared to the baselines, we believe this is because our base model, the instruction-finetuned LLama-3 (8B), is already trained to perform exceptionally well in terms of helpfulness, which is the focus of the AlpacaEval benchmark. Additionally, the preference dataset we used, UltraFeedback, may not provide significant further enhancement in the helpfulness aspect. This is supported by the slight decrease observed in the AlpacaEval score for the standard DPO baseline as well (see Table 3, results on LLama-3). Therefore, we think these AlpacaEval 2.0 results on LLama-3 (8B) may not indicate that SAIL is ineffective; it may be simply caused by an ill-suited combination of base model, finetuning dataset, and evaluation benchmark.\\n\\nWe also further conducted experiments on the Zephyr (7B) model as the backbone, whose AlpacaEval 2.0 win-rate is lower. We still train on the UltraFeedback preference dataset and the other experiment setups are unchanged. 
In this experiment, we see a larger improvement of the SAIL method compared to the standard DPO baseline (Zephyr-7B-Beta).\\n\\n| | AlpacaEval 2.0 (LC) Win-Rate |\\n|--------------------|------------------------------|\\n| Base (Zephyr-7B-SFT-Full) | 6.4 % |\\n| DPO (Zephyr-7B-Beta) | 13.2 % |\\n| SAIL-PP | 15.9 % |\\n\\n> Authors should compare more advanced preference optimization algorithms like ORPO and SimPO. And current results are not impressive for the alignment community.\\n\\n**Response:** Thank you for raising this insightful point. We see that ORPO and SimPO are two recent works that propose a different objective than standard RLHF, and achieve remarkable improvements in terms of alignment performance and efficiency.\\n\\nOur work focuses more on bringing standard RLHF into a bilevel optimization framework and proposes an effective and efficient approximate algorithm on top of it. We can see that some new preference optimization methods, including ORPO and SimPO, have one fundamental difference from our approach: they do not explicitly incorporate the KL regularization term. The absence of the KL regularization term allows these methods to optimize more aggressively for the reward function by deviating significantly from the reference model. In contrast, our approach is specifically grounded in standard RLHF, where the KL regularization term ensures that the model remains aligned with the reference distribution while optimizing for the reward function. This distinction makes direct comparisons with ORPO or SimPO less meaningful theoretically, as those methods omit the KL regularization and adopt a fundamentally different optimization objective design.\\n\\nHowever, we think our work, although developed adhering to the standard RLHF setup, can be compatible and combined with some recent advanced preference optimization algorithms, despite their differences in optimization setups and objectives. This is because we can reformulate their alignment problem as bilevel optimization, and go through the derivation as done in the paper. Taking SimPO as an example, we can treat their reward model definition (Equation (4) in the SimPO paper) as the solution of the upper-level optimization (replacing Equation (4) in our manuscript), and adopt their modified Bradley-Terry objective with reward margin (Equation (5) in the SimPO paper) to replace the standard one (Equation (10) in our manuscript). By applying these changes and rederiving the extra gradient terms, we can formulate an adaptation of our method to the SimPO objective. We will implement this combined algorithm, which adapts our methodology to the SimPO objective, and compare with SimPO as a baseline.\\n\\nRecently, many different alignment objectives and algorithms have emerged; it is an interesting question to discuss the compatibility and combination of our method with each objective. We will add more relevant discussions to the appendices, but because compatibility with each design is a non-trivial question, this process may require considerably more work, and we hope the reviewer understands that this effort cannot be fully reflected within the rebuttal period. But we will continue to expand the discussion, as wide compatibility with other designs also strengthens our contribution to the community.
We thank the reviewer for raising this insightful point.\"}", "{\"summary\": \"Compared to offline RLHF methods, online RLHF methods empirically show stronger performance, yet is computationally expensive, vulnerable to distribution shifts and lacks a unified framework. The authors ablate different online RLHF methods based on all possible combinations (namely, SAIL-PR, SAIL-PP, SAIL-DP) which could be useful for future work exploring online RLHF methods. Personally, it was surprising that SAIL-PP generally works on par or slightly better than SAIL-PR, which open up further research questions on what would be the optimal way to obtain preference dataset.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"4\", \"strengths\": [\"The authors test of two LLM-as-a-Judge benchmarks as well as on a well-established classification benchmark, and their results are consistent.\", \"The authors provide a theoretical explanation of why their method works effectively.\", \"Showing all possible combinations at Figure 2 helped understanding what kind of online RLHF methods one should consider\", \"The results are consistent across smaller models (0.5B) up to widely used scale models (8B).\"], \"weaknesses\": [\"As a practitioner, at least the presentation/writing wasn't clear enough to agree that SAIL provides a unified framework for those who might want to consider using online RLHF in future works. I would personally suggest adding a section explains about how one could use SAIL instead of iterative DPO methods, as well as a huge emphasis on how the provided code could be used.\", \"There is a huge emphasis on trying to improve reward models (on RewardBench) to mitigated reward model overoptimization & train better LMs. I am curious if given a fixed budget/time limit, whether one should try to employ online RLHF methods or try to enhance reward models in general.\", \"I would suggest adding an explanation of what is the limitation of online RLHF methods that the paper could not address. For example, it is still unclear on what is the best practice to \\\"whether to discard instances from a preference dataset that have a subtle difference on the preference strength\\\" or \\\"would it be beneficial to employ more models when gathering responses when consisting a preference dataset\\\".\"], \"questions\": [\"Reward margin and offline-reward evaluation is interesting by itself and could provide information of the effectiveness of the method, but I personally think is not as an important measurement as pairwise winrate. Could you elaborate on Section 6.1 why one should consider looking into it?\", \"Please check the questions in weaknesses as well!\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"summary\": \"The authors identify three significant challenges in online RLHF algorithms: Challenge 1: the interdependence between models and data in implicit reward learning; Challenge 2: the computational complexity of bi-level optimization; and Challenge 3: the reliance on preference oracles. They propose SAIL to address these challenges.\", \"the_main_contributions_of_the_paper_can_be_summarized_as_follows\": \"1. **Unified LLM Alignment Mathematical Framework**: The authors have designed a principled online RLHF framework that provides concrete guidance for generating new responses, assuming the existence of a preference oracle.\\n\\n2. 
**Adaptive Direct Preference Optimization**: By introducing a DPO-style analysis, the authors present an efficient single-layer solution capable of effectively addressing distribution shifts and providing a scalable online preference optimization method.\\n\\n3. **Introduction of a Self-Improvement Mechanism**: This mechanism reduces the reliance on preference oracles.\\n\\n4. **Extensive Experimental Evaluation**: The experiments conducted demonstrate that SAIL significantly outperforms baseline methods.\", \"soundness\": \"3\", \"presentation\": \"4\", \"contribution\": \"3\", \"strengths\": \"1. Introducing Bi-level Preference Optimization: The process of bi-level preference optimization is integrated into the modeling of online RLHF. By leveraging the unique correspondence between the reward function and the LLM policy, this approach innovatively transforms the process into an equivalent single-layer form that is easier to solve.\\n\\n2. Extensive Experiments on SAIL: Comprehensive and rich experiments were conducted to address the three significant challenges in online RLHF and to demonstrate the relevant applications of SAIL.\", \"weaknesses\": \"Regarding the three variants of the SAIL method, Table 3 shows that in the Eval-Reward and MT-bench columns, the SAIL method performs worse than the baseline DPO. Please clarify whether these experimental results undermine the assertion that the SAIL method is superior to the baseline DPO.\", \"questions\": \"There is a large amount of blank space below Section 6.1. Is there any missing content in this part of the paper?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer urgR (1/1)\", \"comment\": \"> Regarding the three variants of the SAIL method, Table 3 shows that in the Eval-Reward and MT-bench columns, the SAIL method performs worse than the baseline DPO. Please clarify whether these experimental results undermine the assertion that the SAIL method is superior to the baseline DPO.\\n\\n**Response:** Thank you for your thorough analysis of our experimental results. In Table 3, we observe that among our variants, only SAIL-DP demonstrates marginally lower performance than the baseline DPO in Eval-Reward and MT-Bench metrics. However, this observation does not affect our broader conclusions regarding the effectiveness of our two primary SAIL implementations: SAIL-PR and SAIL-PP.\", \"let_us_clarify_the_key_points\": \"- SAIL-DP employs a distinct methodology, utilizing responses from the offline dataset with self-generated preference labels. This contrasts with SAIL-PR and SAIL-PP, which generate responses online. Additionally, SAIL-DP operates with a reduced number of preference labels compared to standard DPO.\\n- While SAIL-DP shows slightly decreased performance in Eval-Reward and MT-Bench metrics, it achieves notable improvements in Reward Margin. This is particularly significant given its reduced preference label requirements and minimal computational overhead.\\n\\nThese findings support our overall conclusion regarding SAIL methods' superiority over baseline DPO. We will enhance the manuscript to better articulate the distinct characteristics and trade-offs of each SAIL variant.\\n\\n\\n> There is a large amount of blank space below Section 6.1. Is there any missing content in this part of the paper?\\n\\n**Response:** Thank you for pointing this out. 
The blank space below Section 6.1 is not due to missing content; it is a LaTeX formatting problem. We will address this in the updated manuscript.\"}", "{\"comment\": \"Thank you for the insightful responses. I will keep the current positive score as it is!\"}", "{\"title\": \"Status Update on Additional Experiments\", \"comment\": \"Thank you for your detailed feedback and suggestions for additional experiments. We have carefully reviewed all comments and experimental requests, and are actively conducting the requested evaluations. We will provide a comprehensive response with results soon. We greatly appreciate your constructive feedback and patience as we work to strengthen our work.\"}" ] }
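The rebuttal in the record above sketches how the bilevel RLHF derivation could be re-instantiated on top of a SimPO-style objective. Purely as a hedged illustration of the two preference losses being contrasted there (KL-anchored DPO versus the reference-free, length-normalized SimPO objective), a minimal sketch follows; it is not the authors' implementation, and all tensor names and hyperparameter values are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    # DPO: implicit reward r(y) = beta * (log pi_theta(y|x) - log pi_ref(y|x)),
    # so the Bradley-Terry margin is the difference of KL-anchored log-ratios.
    margin = beta * ((logp_w - ref_logp_w) - (logp_l - ref_logp_l))
    return -F.logsigmoid(margin).mean()

def simpo_loss(logp_w, logp_l, len_w, len_l, beta=2.0, gamma=0.5):
    # SimPO-style: reference-free reward r(y) = (beta / |y|) * log pi_theta(y|x),
    # compared through a Bradley-Terry objective with a target margin gamma.
    margin = beta * (logp_w / len_w - logp_l / len_l) - gamma
    return -F.logsigmoid(margin).mean()

# Toy usage with summed log-probabilities of a chosen/rejected response pair.
logp_w, logp_l = torch.tensor([-42.0]), torch.tensor([-55.0])
ref_w, ref_l = torch.tensor([-45.0]), torch.tensor([-50.0])
lens = (torch.tensor([30.0]), torch.tensor([28.0]))
print(dpo_loss(logp_w, logp_l, ref_w, ref_l).item())
print(simpo_loss(logp_w, logp_l, *lens).item())
```

The structural difference discussed in the rebuttal is visible directly: the first loss is anchored to a reference policy, as in KL-regularized RLHF, while the second is not.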
02haSpO453
VILA-U: a Unified Foundation Model Integrating Visual Understanding and Generation
[ "Yecheng Wu", "Zhuoyang Zhang", "Junyu Chen", "Haotian Tang", "Dacheng Li", "Yunhao Fang", "Ligeng Zhu", "Enze Xie", "Hongxu Yin", "Li Yi", "Song Han", "Yao Lu" ]
VILA-U is a Unified foundation model that integrates Video, Image, Language understanding and generation. Traditional visual language models (VLMs) use separate modules for understanding and generating visual content, which can lead to misalignment and increased complexity. In contrast, VILA-U employs a single autoregressive next-token prediction framework for both tasks, eliminating the need for additional components like diffusion models. This approach not only simplifies the model but also achieves near state-of-the-art performance in visual language understanding and generation. The success of VILA-U is attributed to two main factors: the unified vision tower that aligns discrete visual tokens with textual inputs during pretraining, which enhances visual perception, and the finding that autoregressive image generation can achieve quality similar to that of diffusion models when trained on a high-quality dataset. This allows VILA-U to perform comparably to more complex models using a fully token-based autoregressive framework.
[ "Unified Visual Language Model", "Autoregressive Model" ]
Accept (Poster)
https://openreview.net/pdf?id=02haSpO453
https://openreview.net/forum?id=02haSpO453
ICLR.cc/2025/Conference
2025
{ "note_id": [ "uMPyFz62XX", "gg4i7pnNPQ", "gEwUdBl388", "cGas6kZlaM", "OxnQkdPwss", "L9rXkxDShj", "ERPUllpxWY" ], "note_type": [ "official_review", "meta_review", "decision", "official_review", "official_comment", "official_review", "official_review" ], "note_created": [ 1730093669491, 1734675564473, 1737523481494, 1730681406236, 1732517466307, 1730291386237, 1730534797789 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission2027/Reviewer_n4tc" ], [ "ICLR.cc/2025/Conference/Submission2027/Area_Chair_B5NY" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission2027/Reviewer_7Smq" ], [ "ICLR.cc/2025/Conference/Submission2027/Reviewer_n4tc" ], [ "ICLR.cc/2025/Conference/Submission2027/Reviewer_X72f" ], [ "ICLR.cc/2025/Conference/Submission2027/Reviewer_ma7u" ] ], "structured_content_str": [ "{\"summary\": \"Summary:\\n\\nVILA-U is a foundation model that unifies video, image, and language understanding and generation. Unlike traditional models that use separate components for different tasks, VILA-U simplifies this by employing a single autoregressive framework. This reduces misalignment and maintains near state-of-the-art performance in both understanding and generating visual language content. Key factors for its success include a unified vision tower that aligns visual and textual inputs, enhancing perception, and the ability to achieve high-quality image generation similar to diffusion models.\", \"contributions\": \"1. VILA-U strives for an end-to-end autoregressive model that handles both visual and textual inputs through a unified next-token prediction approach. This approach eliminates the need for external components like diffusion models, simplifying the infrastructure.\\n2. VILA-U is tested across a range of tasks, including image-language and video-language understanding, as well as image and video generation. It demonstrates notable improvements, particularly in narrowing the gap between autoregressive and continuous-token models in visual understanding, while also offering robust visual generation capabilities.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The idea of VILA-U is very straightforward, and the experiments are solid. It significantly enhances the capabilities of end-to-end autoregressive multimodal models in visual-language tasks, bridging the gap between autoregressive multimodal models and the LLAVA series, while also excelling in image generation.\\n\\n2. The structure of the VILA-U paper is simple and easy to read, and the model implementation is very easy to follow.\", \"weaknesses\": \"1.Regarding the issue of missing in context learning assessments, VILA-U has undergone extensive training on image-text sequences and can accept any interleaved layouts of images and text. Therefore, it should possess excellent contextual learning abilities. This work could be enhanced by conducting tests on its ICT capabilities.\\n\\n2.The description of the data curation process is not sufficiently clear, making it uncertain whether the data was meticulously selected or randomly chosen. If it is the former, I suspect that most of the improvements stem from high-quality data engineering rather than advancements in model architecture.\", \"questions\": \"1. The solid experimental results of VILA-U have largely reignited my confidence in the autoregressive image-text unified modeling direction. 
However, why is there no comparison with other text-image unified modeling models such as \\\\textbf{MM-Interleaved, SEED, and DEEM} on image understanding tasks? Ignoring the contributions of pioneers is not advisable.\\n\\n2. The video generation experiments are insufficient. Why not compare with methods like \\\\textbf{OpenSora} and \\\\textbf{CogVideoX} on \\\\textbf{VBench}?\\n\\n3. The article is unclear in its expression; are the visual tokens features directly discretized by the visual encoder, or are they encoded by a large language model? I suspect it is the former.\\n\\n4. VILA-U claims to have lower computational complexity and to avoid misalignment. While I recognize the importance of addressing misalignment, the claim of lower complexity requires experimental support. Specifically, compared to unified autoregressive image-text modeling models, using separate models like fine-tuning Stable Diffusion can also construct end-to-end autoregressive image-text modeling, which is more efficient in training and performs better. Moreover, utilizing existing mature acceleration schemes offers fast speeds. VILA-U should emphasize more on data cleansing quality and misalignment.\\n\\n5. Lastly, and most critically, I hypothesize that the structural improvements of the model provide minimal benefits compared to previous autoregressive unified models, with the majority of improvements stemming from the engineered data cleansing. For instance, MMC4-Core contains 22.4M data while MMC4 has 375M, yet some research indicates that training with these two datasets yields similar outcomes. Large-scale datasets like MMC4 are of very low quality. However, using just 6M of data to achieve excellent results suggests that your data is meticulously filtered, yet the paper lacks any detail on the core contributions of data construction. Conducting experiments on the same data with other model structures like \\\\textbf{DreamLLM} is necessary to demonstrate the efficiency of \\\\textbf{VILA-U}. \\n\\nI will improve my rating score if my concerns are addressed.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"All datasets used are public, no ethics review needed.\", \"rating\": \"6\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"VILA-U presents a unified foundation model that integrates video, image, and language understanding and generation within a single autoregressive next-token prediction framework. Unlike traditional visual language models that use separate modules for understanding and generating visual content, VILA-U employs a unified vision tower that aligns discrete visual tokens with textual inputs during pretraining to enhance visual perception. This innovative approach eliminates the need for additional components like diffusion models, simplifying the model architecture while maintaining high performance across tasks.\\n\\nThe model's effectiveness is demonstrated through comprehensive experiments across multiple benchmarks, where it achieves performance comparable to state-of-the-art specialized models in both understanding and generation tasks. Key to its success is the unified vision tower's ability to capture both semantic and appearance features through residual quantization and a combination of reconstruction and contrastive losses during training. 
The model also demonstrates strong in-context learning capabilities and can handle interleaved sequences of images and text effectively.\", \"additional_comments_on_reviewer_discussion\": [\"The main concerns raised by reviewers included:\", \"Reviewer 7Smq questioned the effectiveness of residual quantization and requested more details about video implementation;\", \"Reviewer ma7u asked for clarification on differences between VILA-U and other tokenization-based models like AnyGPT and SEED-LLaMa;\", \"Reviewer X72f and n4tc questioned the positioning of the work's novelty and whether improvements came from data engineering rather than architectural advances;\", \"Reviewer n4tc mainly asked about the in-context learning abilities of VILA-U;\", \"Reviewers also requested additional comparisons with models like OpenSora and CogVideoX for video generation tasks.\", \"The authors adequately addressed all these concerns during the rebuttal period, ultimately leading to four positive ratings, by (1) Providing ablation studies demonstrating the significant benefits of residual quantization over standard vector quantization, (2) Clarifying that VILA-U's unified vision tower differs from previous approaches by combining both semantic understanding and generation capabilities without requiring external models, (3) Emphasizing that no special data curation was performed and improvements came from architectural innovations like the unified vision tower and residual quantization, and (4) Adding comprehensive comparisons with additional baselines including OpenSora and CogVideoX on the VBench benchmark.\"]}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": [\"The paper presents VILA-U, a unified model for language, image and video understanding + generation\", \"The model is trained with an autoregressive next token prediction loss for all tasks\", \"The paper explores vision encoder choices to ensure understanding and generation performance\"], \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"The paper's most interesting contribution is the unified vision tower exploration to unify generation and understanding and the appropriate ways to train such an encoder\", \"The approach is quite straightforward and the application of RQ-VAE allows for token efficiency while preserving more information\", \"VILA-U is close to SOTA on visual understanding tasks (image and video) with comparable models\", \"The model also fares well on image generation tasks and comes close to diffusion models\"], \"weaknesses\": [\"The method chooses RQ-VAE for efficiency, but there isn't a discussion / results around this. How would the results look if the vision tower didn't use RQ-VAE? How important is the RQ-VAE?\", \"The generated images are relatively low-resolution (256 or 384px), especially since the RQ-VAE allows for increased efficiency in tokens\", \"The paper doesn't really discuss video implementation details. Video understanding and generation have a mismatch in FPS / durations they usually support, what does VILA-U support? There isn't a discussion around this.\", \"The paper claims to support video generation, but there are no quantitative results around this. The two qualitative examples are also very simplistic in Figure 7.\"], \"questions\": [\"Please share missing details as mentioned in the weaknesses\", \"What are the number of image and video tokens going into the LLM? 
How many tokens are processed by the RQ-transformer and what is its size (the RQ-VAE paper has multiple different settings)?\", \"It would be interesting to see if the vision tower training results hold for a general VAE setup instead of an RQ-VAE since that would make the results even more broadly applicable\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Sorry for the late reply. Thank the author for the detailed rebuttals. My main concerns have been addressed, so I increase my score to 6. Looking forward for you open-sourced codebase and models. Please add missing references about DEEM, SEED, and MM-Interleaved.\"}", "{\"summary\": \"The paper, VILA-U presents a unified framework of autoregressive multimodal generation and understanding. It achieves this by first training a vision encoder (discretized via RQ codebook) for text-conditioned image tokens (initialized from CLIP) and then training image+text data using autoregressive modeling. It presents a complete training recipe for creating autoregressive multimodal models, and the resulting model is benchmarked against a wide range of existing models across tasks (generation and understanding)\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The unification of multiple modalities in the same architecture (with the same training objective) is a very important topic. The paper is a valuable contribution to this overall research program. In the current work, the choice of quantized image tokens for image representation makes the autoregressive modeling task more natural as the image modality is tokenized into discrete tokens much like language. This helps minimizes the amount of code development required for adapting existing LLM code bases to their multimodal counterparts.\\n2. The paper performed fairly complete evaluations (image-text, video-text, text-image, ) and ablation studies that include model backbone and training objective.\", \"weaknesses\": \"1. It is not clear to me how to position the work in its novelty or effectiveness and this may be addressable with some rewriting. I see 3 potential angles\\n 1. Training effectiveness by leveraging pretrained networks. The authors motivates the work by emphasizing that existing methods that attempt to unify multimodal generation and understanding either require significant architectural modifications to their uni-modal counterparts, or training from scratch. However, this comparison seems not to play a central role in the subsequent discussions. If the effectiveness of the proposed method is reflected in ease of training, then readers would expect to see comparison of training time/compute for comparable performances. \\n 2. Effective token representation of image modality as discrete tokens: VILA-U differs from prior work in its adoption of RQ-VAE embedding for images. However, if this is the main innovation, the choice of RQ, its superiority over alternative methods, the important of discontinuous embedding of images (as compared to, for example, continuous embedding as in LaViT) will need to be elevated.\\n 3. State-of-the-art performance: If the main contribution is instead just the shear effectiveness of the method. Then it should demonstrate this quantitative in the paper. Unfortunately, the comparison tables doesn\\u2019t seem to suggest that the VILA-U model is the state-of-the-art in most benchmarks. 
Perhaps it achieves Pareto frontier between understanding and generation tasks? Or outperforms other models for the same training compute/time? Either way I\\u2019m not clear what the main advantage of the current work is over others. \\n2. The discussion around training recipe is very important and useful for practitioners. However, it lacks both quantitative and qualitative (with examples) comparisons of the different training recipes. With the conclusion seems to be use an aligned CLIP model for image encoder initialization, which doesn\\u2019t seem to be a novel finding. I would recommend either supporting the discussion with more evaluation (quantitive or qualitative, ideally both) or moving the discussion to the appendix.\\n3. The paper suffers from unsubstantiated claims ( neither references nor experimental support). I've highlighted a few statements that are very important for the message in the paper below:\\n - \\\"replacing continuous tokens with VQ tokens in VLMs usually results in a severe performance drop\\\"\\n - \\\"A straightforward combination of contrastive and reconstruction loss cannot converge\\\"\\n - \\\"both the rFID and Top-1 accuracy of the vision tower only serves as a medium indicator instead of directly linearly correlated to the final performance of our whole multi-modal framework.\\\"\", \"questions\": \"My biggest suggestion/question is related to the number 1 weakness described above. If the author could highlight the main contribution of the work that would make its positioning much easier. One positioning that was left out in the weakness section above is to position the work as the \\\"first\\\" in some regards. However, while autoregressive modeling of text + language is a burgeoning field, VILA-U is not the first model that performs autoregressive modeling of multiple modalities.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents VILA-U, a unified foundation model for visual understanding and generation that integrates image and language processing into a single autoregressive next-token prediction framework. Unlike traditional visual language models that rely on separate modules or diffusion models for generation, VILA-U employs a unified vision tower to discretize visual inputs, aligning them with textual tokens through contrastive learning. From the experiments, the authors show that VILA-U can achieve state-of-the-art performance in both image generation and comprehension.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. VILA-U introduces a unified framework that handles both visual understanding and generation in a single autoregressive next-token prediction model.\\n\\n2. The model leverages a unified vision tower that uses contrastive learning to align discrete visual tokens with textual inputs, which enhances the model's visual perception and text-visual alignment capabilities.\\n\\n3. The experiments indicate the state-of-the-art performance of VILA-U in both image generation and understanding.\", \"weaknesses\": \"1. Missing the clarification between VILA-U and other tokenization-based multimodal models, like AnyGPT [1] and SEED-LLaMa [2]. Those models also used visual tokenizers to discrete the images and trained with causal language modeling loss. 
I noticed the authors cite SEED-LLaMa in line 102, but the claim of \u201cIn this work, we design our framework based on the autoregressive next-token prediction method for visual generation and make our VLM learn to generate visual content effectively.\u201d does not clarify the main difference between VILA-U and SEED-LLaMa.\\n\\n2. One of the claimed contributions of this paper is about proposing the training strategy for the unified foundation vision tower. However, the training strategy seems similar to SEED [3], which also used a contrastive loss between image embeddings and text embeddings. Can the authors clarify the difference between the unified foundation vision tower and SEED?\\n\\n3. Comparisons with other tokenization-based multimodal models [1,2] and Emu2 [4] are missing.\\n\\n4. The limitation section, which is required, is missing.\\n\\n[1] Zhan, Jun, et al. \"Anygpt: Unified multimodal llm with discrete sequence modeling.\" arXiv preprint arXiv:2402.12226 (2024).\\n\\n[2] Ge, Yuying, et al. \"Making llama see and draw with seed tokenizer.\" arXiv preprint arXiv:2310.01218 (2023).\\n\\n[3] Ge, Yuying, et al. \"Planting a seed of vision in large language model.\" arXiv preprint arXiv:2307.08041 (2023).\\n\\n[4] Sun, Quan, et al. \"Generative multimodal models are in-context learners.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.\", \"questions\": \"Please refer to the weaknesses section.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
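Several reviews in the record above turn on the residual quantization (RQ) used in VILA-U's unified vision tower. As a generic, hedged sketch of depth-wise residual quantization (not VILA-U's actual tokenizer; codebook sizes and shapes are invented for illustration), each feature vector is quantized in several rounds, with every round quantizing the residual left by the previous one:

```python
import torch

def residual_quantize(z, codebooks):
    """Quantize feature vectors z (N, d) with a list of codebooks (each K, d).

    Returns the discrete codes per depth and the reconstructed features,
    i.e. the sum of the selected codebook vectors over all depths.
    """
    residual = z.clone()
    codes, recon = [], torch.zeros_like(z)
    for cb in codebooks:                      # one quantization depth per codebook
        dist = torch.cdist(residual, cb)      # (N, K) pairwise distances
        idx = dist.argmin(dim=1)              # nearest code per vector
        picked = cb[idx]                      # (N, d) selected code vectors
        codes.append(idx)
        recon = recon + picked
        residual = residual - picked          # the next depth quantizes what is left
    return codes, recon

# Toy usage: 4 depths, 256 codes each, 16-dim features (all numbers hypothetical).
torch.manual_seed(0)
z = torch.randn(8, 16)
codebooks = [torch.randn(256, 16) for _ in range(4)]
codes, recon = residual_quantize(z, codebooks)
print(len(codes), recon.shape)  # 4 depths of discrete codes, (8, 16) reconstruction
```

The sum of the selected code vectors over the depths stands in for the continuous feature, which is why stacking a few shallow codebooks can retain more appearance detail per position than a single codebook of the same size.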
02Od16GFRW
Ensembles provably learn equivariance through data augmentation
[ "Oskar Nordenfors", "Axel Flinth" ]
Recently, it was proved that group equivariance emerges in ensembles of neural networks as the result of full augmentation in the limit of infinitely wide neural networks (neural tangent kernel limit). In this paper, we extend this result significantly. We provide a proof that this emergence does not depend on the neural tangent kernel limit at all. We also consider stochastic settings, and furthermore general architectures. For the latter, we provide a simple sufficient condition on the relation between the architecture and the action of the group for our results to hold. We validate our findings through simple numeric experiments.
[ "equivariance", "invariance", "ensemble models", "data augmentation", "SGD" ]
Reject
https://openreview.net/pdf?id=02Od16GFRW
https://openreview.net/forum?id=02Od16GFRW
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zqMFzNLabE", "p57cKHF38N", "iJ0f5MQ97z", "gRu2UBNEQL", "g0WeyMOGGc", "e27HEbI0AF", "aFqJ817RpX", "VYde92JmPN", "VVQSMcZxgP", "TlcGqugBRP", "QtZIIxQssu", "LeCUJGZHEZ", "JiDmgDtNHX", "I3eiVnCVfC", "HKJJNQ1JKw", "5gHcJIEzFj", "5M1yC2n4Sl" ], "note_type": [ "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_review", "official_review", "official_comment" ], "note_created": [ 1732535249280, 1730497417502, 1732031597237, 1732737691728, 1732031576659, 1732470978606, 1732736818668, 1737523655858, 1732549199100, 1732693338590, 1732031711186, 1732549293504, 1732031644247, 1734665641577, 1730600353980, 1730565560048, 1732031587812 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission4689/Reviewer_nmuK" ], [ "ICLR.cc/2025/Conference/Submission4689/Reviewer_nmuK" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Submission4689/Reviewer_YfbU" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Submission4689/Reviewer_Ljyp" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ], [ "ICLR.cc/2025/Conference/Submission4689/Area_Chair_h1fU" ], [ "ICLR.cc/2025/Conference/Submission4689/Reviewer_YfbU" ], [ "ICLR.cc/2025/Conference/Submission4689/Reviewer_Ljyp" ], [ "ICLR.cc/2025/Conference/Submission4689/Authors" ] ], "structured_content_str": [ "{\"title\": \"Response to rebuttal\", \"comment\": \"I would like to thank the authors for their response. I have gone through the reviews and the authors' responses. I believe my main concerns regarding novelty and empirical evaluation remain, and so I will keep my initial score.\"}", "{\"summary\": \"The paper presents a theoretical analysis showing that data augmentation can lead to equivariance in deep ensembles. The paper's main result is that under several assumptions (e.g. on initialization, architecture, etc.), deep ensembles trained with data augmentation are equivariant in mean, even when individual models are generally not. A similar result was previously presented, but the paper extends these previous results, which were primarily focused on infinitely wide NNs trained with gradient descent under full augmentation, to ensembles of finite-width trained with SGD and random augmentation.\\nThe paper is mainly theoretical and validates the theoretical results through limited and small-scale empirical experiments.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper is well-structured and easy to follow.\\n1. The paper extends previous results to more reasonable and applicable settings. This is a significant extension.\", \"weaknesses\": \"I like the paper and believe it has a sufficient contribution and interesting results. However, there are several limitations stated below:\\n\\n1. While the assumptions for the theoretical analysis are more applicable compared to previous works, they still hold only for infinite-size ensembles. 
Any analysis (including empirical) on the error bounds for finite ensembles would be beneficial.\\n1. While the results are important, the novelty is somewhat moderate in the sense that the emergent equivariance property of ensembles was previously proposed and the fact that the theoretical analysis heavily relies on previous works [1].\\n1. From the empirical evidence, it is unclear if some of the assumptions (like symmetric initialization) are indeed necessary. The authors discuss this, but I believe it can be extended further.\\n1. Empirical evaluation is limited. It would be beneficial to extend it to more settings, even by small modifications like considering cyclic groups C_k of different orders (k), different architectures, model sizes, etc.\\n1. It would be beneficial to see the impact of ensemble size on the metrics in Table 1, like adding a line plot for ensemble size vs. OSP. The authors show results for different sizes, but summarizing them in one clear view would make it easier to follow.\\n1. The paper could benefit from a clearer and more explicit discussion of the limitations of the results.\\n1. Minor:\\n - Line 37: \\u201c... a definitive question to the question\\u2026\\u201d.\\n\\nReference\\n\\n[1] Flinth & Ohlsson, Optimization Dynamics of Equivariant and Augmented Neural Networks, 2023.\", \"questions\": \"1. Why does the OSP not increase at initialization when ensemble size increases?\\n1. From the figures, it seems like the results could improve with more epochs (also for baselines). Could you please provide results with a larger number of epochs?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their constructive criticism. We are happy to hear that the reviewer finds our results to be of interest to the research community.\\n\\n### Presentation of results in main body of text\\n\\nWe agree with reviewer that the main result which is proved in the main text is not as interesting as the result which is proved in Appendix B (the result in the appendix is stronger). Note however that both results are entirely novel, as far as we could tell. Our reasoning for laying out the text as we do is that snce the two proofs follow essentially the same outline, presenting the simpler result in the main text is more pedagogical. That is, the version of our main theorem which could be proved by using the results on equivariant flows from K\\u00f6hler et al. is presented in the main text precisely *because* it is simpler - less energy is put on the technical details and more on the conceptual.\\n\\n### On the notation of the affine space $\\\\mathcal{L}$\\n\\nThe point here is simply that $\\\\mathcal{L}$ is an affine space (linear manifold) and not a linear space (vector space), that is, it can be described as a base point + the tangent space, which in this case is the parallel space going through the origin. Hopefully this clarifies the notation. We could of course choose another terminology for $\\\\mathrm{T}\\\\mathcal{L}$, such as 'parallel space', or the like, but we think that 'tangent space' is the clearest one.\\n\\nThe reason we consider this as the space of linear layers is simply to include more potential architectures into our analysis.\\n\\n### The results in Table 1\\n\\nThe results in Table 1 are in line with the theory we have developed. 
Since the space of convolutions with asymmetric filters (the asymmetric case) is not invariant under the action of the group, our results no longer guarantee the emergence of equivariance, even though they are invariantly distributed at initialization.\\n\\nIt should be noted that the results for the asymmetric model are also quite close to equivariant, which naturally leads to the question if the sufficient condition we have in our theorem is a necessary one or not. In the paper we hypothesize that it may have to do with the fact that the energy of the asymmetric part of the filters is small, so that the asymmetric filters are approximately symmetric in some sense. In Appendix E, we compare what happens in the case of $5\\\\times 5$ filters and we see that the gap between the symmetric and asymmetric indeed grows when the energy of the asymmetric part is increased.\"}", "{\"title\": \"Updated version of the pdf\", \"comment\": \"We have now uploaded an updated version of the pdf. All changes are marked in blue.\\n\\nIn short, the added material is more or less what was posted in our last comment. We have also made some other minor changes, such as changing the terminology \\\"tangent space\\\" and correcting typos.\\n\\nIn a final version of the paper, we will redo the failed experiments for C16 with BILINEAR interpolation, and make an experiment with standard CNN:s for the NEAREST interpolation. We apologize for not being able to do so before the deadline for pdf updates.\\n\\nWe would like to thank the reviewers again their suggested changes. We think that they have improved the paper.\"}", "{\"title\": \"Planned updates\", \"comment\": \"We would like to thank all the reviewers for their work. Their reviews are all insightful, and contain many valuable suggestion. We have responded to their questions and comments in individual posts.\\n\\nLet us here only advertize the two big updates we will make to the manuscript before the end of the discussion period.\\n\\n* We will perform a new set of experiments for the C16 group.\\n\\n* We will make a more serious evaluation of our models also for smaller ensemble sizes, providing some empirical results in this direction. Here is already a plot formed by measuring the metrics at epoch 10 for the different models (shown is a mean of 10 bootstrapped ensemble samples for each size and model) : [Plot](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/complete_plot.png?v=5df1ae3f) -- the general trend is that the difference in equivariance is detectable already for moderate ensemble sizes. In the updated version of the paper, we will provide data for 30 bootstraps, and perform some statistical tests. See also the comment to reviewer nmuK.\\n\\nWe will try to make the updates as soon as possible.\"}", "{\"comment\": \"I thank the author for their response and the additional plots. I have gone through the author's response as well as the other reviews. Unfortunately, my concerns about the usefulness (theoretical/empirical)/non-trivialness of the theory remain. Moreover, the experiments are not convincing enough to make a case for the theory (e.g., the symmetry component important in theory seems to have minimal empirical impact).\\n\\nI look forward to more experiments the authors have promised in their global response. If there is a way to connect the theory and experiments better or provide more use cases (theoretical/empirical), I would be happy to increase my score. 
But currently, unfortunately, I am unable to do so.\"}", "{\"comment\": \"Thank you again.\\n\\nWe completely agree that readability of papers is very important. We have changed the word 'tangent space' to 'direction', as used on for example Wikipedia, Planetmath and in Geometric Methods and Applications for computer science and engineering, J. Gallier, Springer, 2011.\\n\\nAs for the disposition of the text, we completely understand and respect the reviewer's opinion. It would be possible to write the paper only concentrating on the more technically involved version of Theorem 4.2. However, we feel the need to point out that we do not present any theorems in the appendix which are not at least clearly advertized in the main text. Appendix B is only containing lemmas used in the proof of one of the versions of Theorem 4.2, and the proof of that version. Note that the theorem in the main text mentions the case of training with SGD using random augmentation.\\nWe agree that the theorem formulated and proven in appendix C is only mentioned in passing in the main text. However, the statement of the theorem is still there, albeit not in a theorem environment. Note that precisely stating the result is quite involved, and the result is not needed to prove our main result.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Results of updated experiments (I)\", \"comment\": \"Dear reviewers,\\n\\nwe have had some technical issues, but have now finally managed to run our updated experiments. The results are interesting, and not as clear-cut as one could have wished for. Still, we think that they support the relevance of our theory rather than speak against it.\\n\\nFirst, we have, as advertized, re-evaluated our previous experiments (i.e., for $C_4$) for 30 bootstraps instead of 10 bootstraps per sample size. There are no surprises here: the symmetric architecture still outperforms the asymmetric ones, and does so with statistical significance ($p<.001$) from $250$ ensemble members onwards (with respect to the divergence metric, even from $75$ ensemble members). Here is an updated plot: [plot_C4](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/C4_nearest.png?v=b54e4155)\\n\\nWe have also run the same experiment for the bigger group $C_{16}$. Let us first note that when using this group, we stray from the setting in the paper. The group is no longer acting directly on the support of the images - due to interpolation effects. Hence, the lifted representation $\\\\rho$ on the linear layers $A_i$ no longer perfectly corresponds to rotation of the filters $\\\\varphi_i$ (Example 3.2 is no longer valid). In fact, again due to interpolation, Assumption 1 and 3 are also not given and the spaces $\\\\mathcal{L}$ are no longer invariant.\\n\\nWith this said, here is the plot for our experiments: [plot_C16](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/C16_nearest.png?v=d3772c9f). \\n\\nWe see that while the symmetric filters still produce more equivariant ensembles than the asymmetric ones on the in-distribution MNIST test data, they are actually not better, and even worse with respect to the divergence metric, on the CIFAR10 data. The most striking difference to the $C_4$ experiments are however that the all of the models are significantly less equivariant on the CIFAR10 data. This was not what was expected. 
One realizes that this might have to do with the way we have performed our augmentation: We have used the default 'nearest' interpolation option in torchvision to perform the augmentation, and also making sure that the background of the images are uniform. These transformations are not a representation of the group $C_{16}$ -- if we in particular think about filters of size $3\\\\times 3$, the small rotations in fact do nothing.\"}", "{\"title\": \"Response\", \"comment\": \"Thank you for your comment. I think the paper has scientific merit, which is why I gave it a score above the acceptance threshold. However, the way the paper is written is important and effects is score.\\n\\nAs it is, the main results that make this paper worthwhile are in the supplementary material, not just the proofs but the theorems themselves. This means that the average reader won't even know they exist. Note that as a reviewer \\\"It is not necessary to read supplementary material\\\" making a clear distinction between the main paper and supplementary material.\\n\\nSecond, while tangent space might be clearer to some, it requires prior knowledge of differential manifolds. This isn't always the case in the general ML community, as this isn't part of the standard mathematical tools used. As such, adding this without any reason when a simple linear algebra term would suffice is something that I think is problematic as it makes the paper less accessible for no valid reason.\"}", "{\"title\": \"Answers to questions\", \"comment\": \"### OSP at initialization\\nLet us first state that we do not think that this goes against our theory. Instead, we think that this is essentially what is going on: Before training, the predictions of the networks should be more or less random -- that is, the predictions are independent of the data, and rather only are different due to different draws of the parameters at initialization. Thus, the infinite-member ensembles should more or less, for each datum $x$, give one of the 10 classes completely at random. Note that the latter will almost be true also for finite-sample ensembles. Each rotated version hence has a one in ten chance of being the same as the the non-rotation examples, and the expected value of the OSP is $1.3$, which indeed seems to be approximately the OSP of the big ensembles at initialization.\\n\\nA shorter answer is that this is due to the $\\\\mathrm{argmax}$ function, which is used to determine the predictions, being discontinuous. Note in particular that the KL-divergence-metric is getting smaller when we compare then at 10,100 and 1000 ensemble members (see appendix), so that the ensembles get more and more equivariant at initialization with growing ensemble sizes.\\n\\n### Longer training\\nWe agree that it seems that longer training definitely could lead to more equivariant ensembles. We will however not make any experiments for this, and instead prioritize the C16-experiments. A continuing trend of more and more equivariant ensembles would, as we see it, not say *that* much in this context - the fact that the symmetric ensembles converge faster will still provide the same support to our theory as before. We deem whether the trends continue on another group, where the assumptions are not met in the same clean manner as for C4, a more interesting question, and will therefore prioritize them. 
We hope the reviewer understands this decision.\"}", "{\"title\": \"Results of updated experiments (II)\", \"comment\": \"We therefore repeated our experiments with the 'bilinear' interpolation option. This is also not a representation of the group in a formal manner, but is at least closer to one -- the action of the small rotations is no longer trivial on small filters, for instance. Here is the plot for those results: [plot_C16_bilinear](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/C16_bilinear.png?v=23256f1b)\\n\\nWe see that our models now become less invariant on the MNIST data, but more invariant on the CIFAR data. The former can be explained with the fact that the bilinear interpolation will produce images that are blurrier, and also result in a non-uniform background -- the dataset hence becomes more diverse, and it will be a harder problem to learn it. The models can hence not rely on simply learning to perform well on the dataset to become equivariant, as seemed to be enough in the case of using the 'nearest interpolation'.\\n\\nThe still bad performance of the symmetric and asymmetric models is in fact explained by our main result! When rotating an $\\\\mathcal{L}^{\\\\mathrm{sym}}$-filter with $\\\\pi/4$, we will approximately end up with a filter with only non-zero elements on the corners. This is very far from being a $\\\\mathcal{L}^{\\\\mathrm{sym}}$ filter -- the invariance condition is hence far from being satisfied. The asymmetric support for some reason performs slightly better -- we could speculate on why, but ultimately, they perform badly, as would be predicted by the fact that space $\\\\mathcal{L}^{\\\\mathrm{asym}}$ is asymmetric. When repeating the experiments with standard $3\\\\times 3$-filters, something different happens though -- as can be seen in the plot, they vastly outperform the non-standard filters. The corresponding subspace $\\\\mathcal{L}^{\\\\mathrm{cnn}}$ is still not perfectly invariant to non-$\\\\pi/2$-rotations - and also do not yield perfectly equivariant ensembles -- but they are definitely 'more' invariant than both the non-standard filter supports considered -- a rotated $3\\\\times 3$-filter will 'bleed' somewhat, but not as extremely as a $C_4$-symmetric filter.\\n\\nFor full disclosure, we should mention that the bootstraps in the final plot for the non-standard filters are only over approximately 900 total ensemble members -- due to technical difficulties, not all 1000 members finished their training. This will be fixed in a final version. We should course also repeat the CNN experiments also for the nearest interpolation - we will do so in the final version, but already want to report the results we have now for the reviewers to consider.\\n\\nAll in all, we believe that this new set of experiments speaks *in favour* of the practical importance of our theory. Our experiments indicate that in situations where the compatibility condition is not satisfied, the augmentation will *not* lead to equivariant ensembles by itself! One can also note that this aspect of the theory is not at all present in Gerken and Kessel - and only somewhat tangentially in Nordenfors, Ohlsson and Flinth. 
In this spirit, we thank the reviewers very much for suggesting these experiments.\\n\\nWe understand that we are very close to the end of the discussion period, and understand that the reviewers have already put a lot of effort into reviewing our work, but still hope that they can take the time to consider also these last-minute developments.\"}", "{\"title\": \"Comments on weaknesses\", \"comment\": \"We thank for the constructive review. We are happy to hear that the reviewer thinks that our paper is easy to follow, and that our extension makes the results applicable in more reasonable settings compared to previous results.\\n\\nAll of the points the reviewer makes are valid, as are the suggestions. Let us in the following comment on each on the weaknesses and the questions.\\n\\n### Infinite vs finite size ensembles\\nIt is a reasonable suggestion to include more results about ensembles of finite size. It should be noted that we already have some plots related to the importance of ensemble size in the appendix. We agree that these are somewhat hard to interpret. We have therefore chosen to redo the evaluations, to include more sizes. We have at the time of writing of this rebuttal built new sub-ensembles from our trained models for more ensemble sizes, and measured each of our metrics for the resulting models at epoch 10.\\n\\nUsing a simple t-test on 10 (bootstrapped) samples per size and model, we can confirm with statistical significance (p<.005) that with respect to the KL-divergence, \\n\\n* $\\\\mathcal{L}^{\\\\mathrm{sym}}$-ensembles are more equivariant than the $\\\\mathcal{L}^{\\\\mathrm{assym}}$ with symmetric initialization for ensemble sizes bigger than or equal to 75 on MNIST, and bigger than 100 on CIFAR.\\n\\n* $\\\\mathcal{L}^{\\\\mathrm{sym}}$-ensembles are more equivariant than the $\\\\mathcal{L}^{\\\\mathrm{assym}}$ with asymmetric initialization for ensemble sizes bigger than or equal to 25 on MNIST, and bigger than 25 on CIFAR.\\n\\nSee also the following plot (also showcasing OSP) : [Plot](https://anonymous.4open.science/api/repo/ensemble_experiment-1B83/file/graphics/complete_plot.png?v=5df1ae3f)\\n\\nIn the updated version of the paper, we will present data for 30 bootstrapped examples (a setting in which a t-test makes more sense) on all metrics. We can already now conclude that the difference in performance between the different versions is present already for moderate ensemble sizes.\\n\\n### Novelty\\nWe understand and respect the reviewer's point, but hope that they can also agree that the results from the different papers have been put together in a non-trivial way to produce new, meaningful results.\\n\\n### Necessary vs. sufficient conditions\\nWe have indeed only proven sufficient conditions, and we agree that this can be made clearer in the text. We however genuinely believe that proving more than we have already done goes beyond the scope of this work - significantly new ideas need to be applied to obtain a result about convergence towards, rather invariance of, the symmetrical models.\\n\\n### More groups\\nThe reason for only testing the C4 group is that we there have a clean example of where our results apply. 
When going over to rotation groups of higher order, one starts to need to interpolate, and the invariance condition will not be as clear cut as before.\\nWe however agree that it is beneficial to also perform experiments in a more 'dirty' setting as far as our theory concerns, since this will provide more information about its practical relevance. We will make one other rounds of experiments, for C16. This will take some time to setup and evaluate, whence we cannot report on results now - we will do this as soon as possible.\\n\\n### Limitations\\nIt is a reasonable suggestion to include a compilation of the limitations in order to increase the readability of the paper. We will do so in an updated version of the paper. As we see it, our main limitations are\\n* Our condition is sufficient rather than necessary\\n* Our guarantee is only about the infinite-member limit of ensembles.\\n\\nWe can also remark that the following are things we *speculate* on, but *haven't* proven:\\n* The extent to which $\\\\Pi_{\\\\mathcal{L}}$ and $\\\\Pi_G$ commute is indicative of emergent equivariance - the smaller it is, the more equivariant the ensembles should be.\\n* The set of symmetric models may be an attractor of the dynamics, and not only stable.\"}", "{\"metareview\": \"This paper studies how equivariance emerges in ensembles of neural networks trained with data augmentation. The authors extend prior theoretical results by showing that equivariance holds under weaker conditions that exist in prior work (e.g. without requiring the NTK limit).\\nThe reviewers appreciated the clear writing and sound theoretical development. However, several concerns were raised about both the theory and experiments: The theoretical contribution it appears to be minor, and the empirical validation was quite limited in scope. The was also some apparent disconnect between the theory predictions and experimental results, which brought into question the relevance of the theory.\\nThe authors engaged constructively with reviewers in the rebuttals. \\nHowever, the concerns remain significant enough to prevent me from recommending acceptance.\", \"additional_comments_on_reviewer_discussion\": \"See above.\"}", "{\"summary\": \"This paper shows that an ensemble of models when trained with data augmentation leads to emergence of equivariance properties naturally. The results generalize over past known results based on NTKs. The theory assumes some basic assumptions on the architecture and shows that, when the initialization of the weights in an architecture has some symmetry, then, the expected architecture of the ensemble is equivariant. Experimental results with various ensembles validates the results for the C4 group of symmetries.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The work show the emergence of equivariant in ensemble models\", \"The work generalizes previous works where the proof relied on NTKs\", \"Experiments with large ensemble of models show the emergence of equivariance\"], \"weaknesses\": \"I have several concerns over the usefulness of the theory and the experimental results.\", \"usefulness_of_theory\": [\"What is the use of the theory in model design or practical use cases? Since equivariant models seems to give perfect equivariance and data augmentation techniques give approximate equivariance. 
So, I am wondering what is the use of ensemble technique for symmetries, especially, given that we need over 1000 models to get good equivariant results.\", \"What are the advantages of the proposed technique compared to existing symmetrization and canonicalization methods [1-4] that can convert non-equivariant models into equivariant ones using techniques somewhat similar to ensemble methods but with additional transformations that looks similar to augmentation.\"], \"experimental_results\": \"- Although the experimental does show that the architecture with symmetric support does give invariant output, but even the asymmetric architecture seems to be giving invariant output, questioning the usefulness of the theory. It is also discussed in the paper about the symmetric states being attractors potentially, but, it still makes the current theory not very useful.\\n- Experiments are only shown for C4 symmetries\\n\\n[1] Basu, Sourya, et al. \\\"Equi-tuning: Group equivariant fine-tuning of pretrained models.\\\"\\u00a0Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 37. No. 6. 2023.\\n\\n[2] Mondal, Arnab Kumar, et al. \\\"Equivariant adaptation of large pretrained models.\\\"\\u00a0Advances in Neural Information Processing Systems\\u00a036 (2023): 50293-50309.\\n\\n[3] Basu, Sourya, et al. \\\"Efficient equivariant transfer learning from pretrained models.\\\"\\u00a0Advances in Neural Information Processing Systems\\u00a036 (2024).\\n\\n[4] Kaba, S\\u00e9kou-Oumar, et al. \\\"Equivariance with learned canonicalization functions.\\\"\\u00a0International Conference on Machine Learning. PMLR, 2023.\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper expands the results of Gerken & Kessel that show that data augmentation produces equivariant ensembles of models using NTK, by looking at finite network sizes. They then show empirically that their theoretical results indeed hold in practice (up to sampling errors).\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"It generalizes the results in Gerken & Kessel\", \"The topic of invariance/equivariance is important so these results would be of interest to people in that community\"], \"weaknesses\": [\"My main issue is with the writing:\", \"The results presented in the main text are quite trivial, that if you start with an invariant distribution and use an invariant flow you end up with an invariant distribution. The more interesting results are in the appendix (appendix B and C)\", \"You writing $\\\\mathcal{L} = A_\\\\mathcal{L} + T\\\\mathcal{L}$ with $T\\\\mathcal{L}$ the tangent space is very confusing, as tangent space is defined for a manifold and we are talking about a linear space. It needlessly complicates things as there is no need to involve differential geometry when we are working on linear spaces.\"], \"questions\": \"The results in Table 1 aren't that clear to me. In the asymmetric case where you have a symmetric initialization, shouldn't you get results that are similar to the symmetric case? Yet there is a large gap\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"We thank the reviewer for their constructive criticism. 
We also understand the reviewer's concerns regarding the applicability of the theoretical developments to model design. However, we hope that we can convince the reviewer of the importance of the theoretical developments regardless of their immediate applicability.\", \"the_general_question_that_motivated_this_paper_is\": \"\\\"Does data augmentation lead to equivariance?\\\" The technique of data augmentation has been used for a long time in order to align models with various operations, that is, to make them more robust. There is however little in the way of theoretical guarantees of this observed property of data augmentation. In our case we restrict ourselves to studying alignment with symmetries, that is, to emergent group equivariance from data augmentation. In this context, our theoretical results can be viewed as a partial answer to the general question that motivates our research.\\n\\n### Usefulness of theory\\n\\nIn our paper, the objective is to show that when training ensembles of networks from scratch under data augmentation, there is an emergent equivariance coming from the optimization process itself. On the other hand, in the papers [1-4], the goal is to develop methods for making a pre-trained model equivariant. In papers [1,3], this is done by averaging the model over the orbit under the group action. This differs from ensembling as considered in out paper, since we average over initializations and random draws of group elements during training. In papers [2,4], it is done by precomposing the model with an equivariant canonicalization map. Although the authors of paper [2] note an augmentation effect of the not yet aligned canonicalization map during training, this is not the cause of the equivariance in this case, and the augmetation effect goes down over time. The results in papers [1-4] are very interesting and we are not suggesting that people should favor our methods over the ones found in these papers. In fact, it is hard to see how our results would apply in the context of *finetuning* foundation models, which is the main focus in at least [1,3]. (They are in principle applicable when the models are trained from scratch).\\n\\n### Experimental results\\n\\nAs the reviewer notes, our experimental results seem to indicate that even the models with asymmetric filters become equivariant. This suggests that the sufficient condition in our theorem is not a necessary one. We do not think that this weakens our theory, it merely suggests that further developments are possible. Furthermore, in the paper we hypothesize that this might have to do with the fact that the asymmetric filters are approximately symmetric in the sense that the energy in the asymmetric part is quite small. In Appendix E we provide details on the same experiment performed with $5\\\\times 5$ filters instead of $3\\\\times 3$ filters and we see that indeed the gap between the symmetric and the asymmetric model grows when the energy of the asymmetric part is increased.\\n\\n### Experiments beyond C4\\n\\nThe reason for only testing the C4 group is that we there have a clean example of where our results apply. When going over to rotation groups of higher order, one starts to need to interpolate, and the invariance condition will not be as clear cut as before. \\nWe however agree that it is beneficial to also perform experiments in a more 'dirty' setting as far as our theory concerns, since this will provide more information about its practical relevance. 
We will make one other rounds of experiments, for C16. This will take some time to setup and evaluate, whence we cannot report on results now - we will do this as soon as possible.\"}" ] }
02DCEU6vSU
Gen-LRA: Towards a Principled Membership Inference Attack for Generative Models
[ "Joshua Ward", "Chi-Hua Wang", "Guang Cheng" ]
Evaluating the potential privacy leakage of synthetic data is an important but unresolved problem. Most existing adversarial auditing frameworks for synthetic data rely on heuristics and unreasonable assumptions to attack the failure modes of generative models, exhibiting limited capability to describe and detect the privacy exposure of training data. In this paper, we study designing Membership Inference Attacks (MIAs) that specifically exploit the observation that generative models tend to memorize certain data points in their training sets, leading to significant local overfitting. Here, we propose Generative Likelihood Ratio Attack (Gen-LRA), a novel, computationally efficient shadow-box MIA that, with no assumption of model knowledge or access, attacks the generated synthetic dataset by conducting a hypothesis test that it is locally overfit to potential training data. Assessed over a comprehensive benchmark spanning diverse datasets, model architectures, and attack parameters, we find that Gen-LRA consistently dominates other MIAs for generative models across multiple performance metrics. These results underscore Gen-LRA's effectiveness as an interpretable and robust privacy auditing tool, highlighting the significant privacy risks posed by generative model overfitting in real-world applications.
[ "Privacy", "Membership Inference Attacks", "Generative Models" ]
https://openreview.net/pdf?id=02DCEU6vSU
https://openreview.net/forum?id=02DCEU6vSU
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v2kiJYWXcq", "RCzL0WikF4", "IDT940ZREW", "FSjb9PJzIo", "DmNAPjR8Wk", "1jXn1ww1AV" ], "note_type": [ "official_review", "official_review", "official_review", "official_review", "comment", "official_review" ], "note_created": [ 1730638197975, 1730623641021, 1730286651189, 1730152041539, 1732725402355, 1730745668671 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission12851/Reviewer_cCQH" ], [ "ICLR.cc/2025/Conference/Submission12851/Reviewer_Yeu8" ], [ "ICLR.cc/2025/Conference/Submission12851/Reviewer_6TYk" ], [ "ICLR.cc/2025/Conference/Submission12851/Reviewer_Phn8" ], [ "ICLR.cc/2025/Conference/Submission12851/Authors" ], [ "ICLR.cc/2025/Conference/Submission12851/Reviewer_tZB4" ] ], "structured_content_str": [ "{\"summary\": \"This paper introduces the Generative Likelihood Ratio Attack (Gen-LRA), a novel membership inference attack specifically aimed at detecting privacy leakage due to overfitting in generative models. Unlike prior methods, Gen-LRA employs a likelihood ratio-based hypothesis testing approach to infer membership without requiring extensive knowledge of the model structure or parameters. By leveraging density estimation techniques, the authors assess whether synthetic data generated by a model is overfitting to specific training data points, particularly in regions with outliers. The authors demonstrate that Gen-LRA significantly outperforms existing MIA methods across various generative architectures and datasets, with particular success in scenarios with low false positive rates, highlighting the nuanced privacy risks associated with generative models.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"This paper introduces the Generative Likelihood Ratio Attack (Gen-LRA), a novel membership inference attack specifically aimed at detecting privacy leakage due to overfitting in generative models. Unlike prior methods, Gen-LRA employs a likelihood ratio-based hypothesis testing approach to infer membership without requiring extensive knowledge of the model structure or parameters. By leveraging density estimation techniques, the authors assess whether synthetic data generated by a model is overfitting to specific training data points, particularly in regions with outliers. The authors demonstrate that Gen-LRA significantly outperforms existing MIA methods across various generative architectures and datasets, with particular success in scenarios with low false positive rates, highlighting the nuanced privacy risks associated with generative models.\", \"weaknesses\": \"1. The effectiveness of Gen-LRA depends heavily on accurate density estimation, which can be challenging in high-dimensional data settings. The use of kernel density estimation (KDE) or principal component analysis (PCA) for dimensionality reduction may limit applicability and accuracy. This limitation is critical because the success of the Gen-LRA method hinges on reliable density estimation, which becomes less accurate in high-dimensional spaces without significant computational expense. Inaccuracies here can undermine the method's robustness, making this the most pressing limitation.\\n2. Although Gen-LRA performs well at low false positive rates, its reliance on outlier detection may lead to elevated false positives in datasets with inherently high variability or complex distributions. 
False positives can impair the practical applicability of Gen-LRA in privacy-sensitive contexts, as overly cautious results may lead to unnecessary restrictions on data release. \\n3. Gen-LRA presumes that privacy leakage primarily stems from overfitting, potentially overlooking other forms of leakage that may not manifest as local overfitting. This could lead to incomplete privacy assessments, as the Gen-LRA approach might miss privacy vulnerabilities that do not align with the overfitting model. Expanding Gen-LRA\\u2019s scope to address other leakage types could enhance its overall utility.\", \"questions\": \"1.The manuscript lacks a clear explanation of the practical utility of applying MIA to synthetic data. It remains unclear why synthetic data was chosen as the focus, rather than real-world or other benchmark datasets. The authors are encouraged to provide references in the Related Work section to strengthen the justification for studying synthetic data specifically. Expounding on the unique relevance of synthetic data to MIA would better demonstrate the necessity and contributions of this study.\\n2.Several typographical errors and repeated references are present in the reference section, such as on Line 527 and Line 729. A thorough review of the references is recommended to ensure accuracy and consistency across all citations.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a new approach to do membership inference attacks for tabular data generative models. The approach first estimates the distributions of (1) the reference samples plus the target sample and (2) the reference samples with kernel density estimation, and then computes the density ratio of synthetic samples over these two distributions. The intuition is that, if the target sample were used in training, the density of synthetic samples on distribution (1) would be higher. Results across various datasets and models show that the proposed approach yields better AUC-ROC and TPR at low FPRs.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The proposed method is simple and effective.\", \"In general, the writing of the paper is clear.\", \"The paper has demonstrated results on many datasets and models.\"], \"weaknesses\": [\"The assumption that the reference data is available to the attacker is too strong.\", \"The title and the abstract do not reflect the scope and constraint of the method sufficiently.\"], \"questions\": \"First, I would like to point out that I am not fully up-to-date on the literature regarding membership inference attacks, especially those involving tabular data. As a result, I may be unable to assess the novelty of this work and might not be familiar with the common settings examined in recent literature.\\n\\n1. The paper assumes the reference data is available to the attacker. This does not seem to be very realistic to me. Section 1 discussed that a common scenario for synthetic data release is that the data owner wants to release data for open research. This implies that such data is not available to the public before that (if such data is already available, then there is no motivation or value for the data owner to release an additional dataset). That means that the attacker does not have access to the reference data either. 
The prior work I knew often considered attacks that do not make such assumptions (e.g., https://arxiv.org/pdf/1705.07663 and https://arxiv.org/pdf/1909.03935).\", \"the_paper_claims_that_this_setting_is_realistic_in_section_2\": \"\\\"We assume this in practice because this represents a plausible scenario for the owner of S as an attacker may be able to find comparable data in the real world...\\\" Unfortuantely, I do not fully understand this example. It would be great if the author can explain it in more detail in the rebuttal.\\n\\n2. Continuing on the above point, the paper needs to make it clearer what assumptions each of the baseline methods in Section 5 make. Which of them also makes the assumption that reference data is available to the attacker? This would clarify whether the claimed improvement comes from the relaxation of the assumptions or the fundamental advances of the algorithm itself.\\n\\n3. The paper only evaluates the proposed algorithm on tabular data. But this is not reflected in the title and abstract. By reading only the title and the abstract, the readers might be misled to think that the paper proposes and evaluates the attack on diverse data types.\\n\\n I think it is important to clarify that, as the proposed approach relies on kernel density estimation, which (as discussed in the paper) does not scale well with the data dimension. (The proposed approach relies on dimension-reduction techniques to tackle the issue.) Therefore, it is unclear if such a pipeline can work well on other more high-dimensional and complicated data such as images and text. \\n\\n4. How do you determine the kernel size and the type of the kernel in the experiments? Is the algorithm sensitive to that?\\n\\n5. Section 5 mentioned that \\\"For Gen-LRA, we found that the choice of k can have a small impact on the performance of the attack (See Appendix A.3), we therefore use the results of the best k choice for each run as the goal for an MIA is to characterize the maximal empirical privacy risk.\\\" I understand that choosing the best k could help \\\"characterize the maximal empirical privacy risk\\\". However, this table is mainly for comparing between different baselines. The comparison would be unfair if you chose the best hyper-parameter for your own approach while not doing that for the baseline methods.\\n\\n7. The discussion in Section 6.2 is nice, but it would be more self-contained if the paper could describe how DCR works in the main text.\", \"other_minor_questions\": \"1. Section 1: \\\"We demonstrate that Gen-LRA identifies a different source of privacy leakage relative to other commonly used MIAs.\\\" It would be better to clarify what \\\"the different source\\\" means here. I could only understand it after reading Section 5.\\n\\n2. Line 116 and 117: what are M and D? These notations do not seem consistent with what was used before.\\n\\n3. Line 127: typo on the left quotation mark\\n\\n4. Line 266: missing a )\", \"flag_for_ethics_review\": \"['Yes, Privacy, security and safety', 'Yes, Potentially harmful insights, methodologies and applications']\", \"details_of_ethics_concerns\": \"The paper focuses on membership inference attacks, which could be leveraged by adversaries to launch privacy attacks.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces Gen-LRA, a novel membership inference attack (MIA) methodology for evaluating privacy risks in synthetic tabular data. 
The authors propose a hypothesis testing framework that computes a likelihood ratio specifically targeted at identifying any local overfitting of the target record. The method requires minimal assumptions, just access to the released synthetic dataset and a reference dataset. They find their method to outperform baselines from the literature across 15 datasets. They further find their method to be particularly successful against outliers, in contrast with other MIAs from the literature.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"Technically novel, and interesting, way to compute the membership inference inference signal from synthetic data. The method is theoretically grounded, computationally efficient and relies on limited assumptions for the attacker.\", \"They show the method to outperform a range of MIAs from the literature\", \"Comprehensive evaluation of the attack across 15 datasets\", \"Authors include intuitive examples (eg Fig 1 and Sec 6.2) that are well explained and help the understanding of the paper.\"], \"weaknesses\": \"(More details see questions)\\n\\n- My main concern comes down to a lack of related work being discussed. A range of important works have studied MIAs against synthetic tabular data using shadow modeling [1,2,3]. While I understand that these works are computationally more expensive and additionally rely on the attacker's knowledge of the training algorithm, I find these works to be very relevant to position this paper and its findings. \\n- Limited secondary insights with experimental depth. For instance, to make the claim that the method works better for outliers (especially compared to other methods), section 5.3 is mostly anecdotal. \\n\\n[1] Stadler, T., Oprisanu, B., & Troncoso, C. (2022). Synthetic data\\u2013anonymisation groundhog day. In 31st USENIX Security Symposium (USENIX Security 22) (pp. 1451-1468).\\n\\n[2] Houssiau, F., Jordon, J., Cohen, S. N., Daniel, O., Elliott, A., Geddes, J., ... & Szpruch, L. TAPAS: a Toolbox for Adversarial Privacy Auditing of Synthetic Data. In NeurIPS 2022 Workshop on Synthetic Data for Empowering ML Research.\\n\\n[3] Meeus, M., Guepin, F., Cre\\u0163u, A. M., & de Montjoye, Y. A. (2023, September). Achilles\\u2019 heels: vulnerable record identification in synthetic data publishing. In European Symposium on Research in Computer Security (pp. 380-399). Cham: Springer Nature Switzerland.\", \"questions\": [\"Can you expand the related work to also include the shadow-modeling based MIAs?\", \"To truly understand the contribution, could you implement the shadow-modeling based MIAs [1,2,3] as well and report their results? Right now, the Gen-LRA method seems to be better than the prior work you consider, and does so with limited assumptions for the attacker and with limited computational cost. How does this change when the attacker now (i) has knowledge of the training algorithm and (ii) has the computational resources to train shadow models? Could authors implement these shadow-model MIAs and report the results alongside Gen-LRA? This would help to position the method and its results in the literature, giving a clear understanding of the impact of certain assumptions and computational cost on the MIA results.\", \"Similarly, the work on shadow modeling MIAs also discusses disparate vulnerability of outliers [1,2,3]. 
Stadler et al [1] finds outliers to be more vulnerable than randomly selected records, while Meeus et al [3] proposes a method to identify more vulnerable records. Could authors have more elaborate results for the outlier discussion (e.g. show MIA results for outliers vs random points across datasets) and relate these findings to prior work? While the fact that Gen-LRA focuses on outliers is distinct from distance-based methods, these findings might not be very different than the ones in shadow-modeling based MIAs.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes a novel membership inference attack on synthetic data generators called Gen-LRA, based on estimating a likelihood ratio between the synthetic data coming from a reference distribution vs. it coming from the reference distribution with a target point included. Gen-LRA is benchmarked againt several competing attacks on a variety of datasets, where Gen-LRA generally outperforms the competition.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The likelihood ratio that Gen-LRA estimates is novel to my knowledge, and seems to be closer to the likelihood ratio that would be theoretically optimal than what previous work has looked at. The paper is easy to understand, and the writing is generally polished.\\n\\nLooking at TPR @ low FPR is good practice, and too often neglected in the MIA literature. The paper could even highlight these results further: most of the AUC-ROC scores for all methods are close to random guessing, but Gen-LRA is much more accurate than random guessing at FPR = 0.001.\", \"weaknesses\": \"Using the PCA+KDE density estimator for DOMIAS is not fully fair, since the DOMIAS paper used a more sophisticated density estimator which was found to perform better than the KDE. Of course, the same estimator could also improve the results of Gen-LRA, and PCA+KDE could be computationally cheaper, but these should be checked empirically.\\n\\nUsing PCA may limit the applicability of outlier overfitting detection for outliers with rare categorical values. For example, consider the detection of overfitting on datapoints of French people on the Adult dataset. PCA weights the input dimensions based on how much variance they have, so the indicator for being French would have a very low weight (<1% of the data is French). As a result, the PCA outputs would be very similar between French and non-French people, and Gen-LRA would not be able to detect overfitting affecting French people. Unless I'm completely mistaken about this phenomenon, this should be mentioned as a limitation.\\n\\nFor a similar reason, you should check if datapoints with high DCR score have similarities. It could be that they do, but UMAP is not considering these important. This could change the interpretation of Figure 2 that DCR does not target specific outlier regions. \\n\\nYou should also discuss the fact that Ward et al. (2024) report a very similar finding to your Figure 2 with their MIA. As a part of this, it would be interesting to see analogues of Figure 2 for the other MIAs used as baselines.\\n\\nPlease include separate results from each dataset in addition to the mean results across datasets. The datasets could have significant performance differences that the aggregation hides. 
I'm also not sure if the standard deviations of performance across different datasets are meaningful in any way.\", \"minor_points\": [\"The paper should make the differences between DOMIAS and Gen-LRA clearer, since the methods are fairly similar.\", \"It not clear what $\\\\mathbb{P}\\\\cup \\\\{x^*\\\\}$ precisely is, which makes the motivation leading to Equation 4 seem a bit handwavy.\", \"Contribution 1: this sentence is a bit unclear, making it seem like the null and alternative hypotheses are the same.\", \"Line 172: capitalise \\\"equation 4\\\".\", \"Line 266: missing parenthesis.\", \"Line 346: \\\"scale\\\" is ambiguous, I would suggest \\\"normalise\\\" if that is what you are doing.\", \"Several references are missing the publication forum, for example Durkan et al. (2019), Ganev and De Cristofaro (2023).\"], \"questions\": \"No further questions.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"withdrawal_confirmation\": \"I have read and agree with the venue's withdrawal policy on behalf of myself and my co-authors.\", \"comment\": \"We thank the reviewers for their high quality and helpful reviews. Given this feedback, we believe that this work would be better presented with additional experiments and writing revisions that are outside of the scope of this rebuttal window. For these reasons, we are withdrawing our submission.\"}", "{\"summary\": \"The paper describes a membership inference attack on generative models. It requires a set of examples generated by the model, S, and a set of reference examples, R, presumably from the same distribution as the data the model was trained on. Then to guess whether some new point x* was part of the training data, it estimates the likelihood ratio of S between a model trained on R vs. a model trained on $R \\\\cup \\\\{x*\\\\}$ using two kernel density estimators. It then thresholds on the likelihood ratio. Experimental results demonstrate impressive improvements compared to baseline models, particularly when evaluated with the critical \\\"true positive rate at low false positive rate\\\" metric.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"The idea of performing MIA on a generative model by using likelihood ratio of generated data between models with and without the targeted example is very natural and efficient. I'm not surprised that it is very effective, as demonstrated in the experiments. The paper is mostly well-written and well-motivated, and to my knowledge original.\", \"weaknesses\": \"I'm afraid the specific approach of using kernel density estimators will limit the method's applicability to low-dimensional tabular datasets. I would love to see this idea generalized to higher-dimensional data, probably using something that will scale better than KDEs.\", \"questions\": \"1. Although I could follow the gist of the idea, some of the notation is not precisely defined. $p_{\\\\mathbb{P} \\\\cup x*}$. It might be clearer to skip Eq.s 3/4 and jump to Eq 5.\\n1. Do you have any ideas for how to generalize this to forms of data that are not amenable to KDEs (even after applying PCA)?\\n1. Section 5.3 is not clear to me. What exactly is the experiment here, and what is it supposed to demonstrate?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
029hDSVoXK
Dynamic Neural Fortresses: An Adaptive Shield for Model Extraction Defense
[ "Siyu Luan", "Zhenyi Wang", "Li Shen", "Zonghua Gu", "Chao Wu", "Dacheng Tao" ]
Model extraction aims to acquire a pre-trained black-box model concealed behind a black-box API. Existing defense strategies against model extraction primarily concentrate on preventing the unauthorized extraction of API functionality. However, two significant challenges still need to be solved: (i) Neural network architecture of the API constitutes a form of intellectual property that also requires protection; (ii) The current practice of allocating the same network architecture to both attack and benign queries results in substantial resource wastage. To address these challenges, we propose a novel \textit{Dynamic Neural Fortresses} (DNF) defense method, employing a dynamic Early-Exit neural network, deviating from the conventional fixed architecture. Firstly, we facilitate the random exit of attack queries from the network at earlier layers. This strategic exit point selection significantly reduces the computational cost for attack queries. Furthermore, the random exit of attack queries from earlier layers introduces increased uncertainty for attackers attempting to discern the exact architecture, thereby enhancing architectural protection. On the contrary, we aim to facilitate benign queries to exit at later layers, preserving model utility, as these layers typically yield meaningful information. Extensive experiments on defending against various model extraction scenarios and datasets demonstrate the effectiveness of DNF, achieving a notable 2$\times$ improvement in efficiency and an impressive reduction of up to 12\% in clone model accuracy compared to SOTA defense methods. Additionally, DNF provides strong protection against neural architecture theft, effectively safeguarding network architecture from being stolen.
[ "Model Extraction Defense" ]
Accept (Poster)
https://openreview.net/pdf?id=029hDSVoXK
https://openreview.net/forum?id=029hDSVoXK
ICLR.cc/2025/Conference
2025
{ "note_id": [ "rvRdbzPCIy", "eZyr33wMG6", "dHvqGcF4MN", "Ef7KCRhqMy", "CXqIEtrkoQ", "5SBOcCjypX", "1VP9nuyC4G" ], "note_type": [ "decision", "official_review", "official_review", "meta_review", "official_review", "official_review", "official_review" ], "note_created": [ 1737523691827, 1730697380551, 1730718075531, 1734793893270, 1730700974929, 1730714970912, 1730301204098 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission5212/Reviewer_FTna" ], [ "ICLR.cc/2025/Conference/Submission5212/Reviewer_dSR3" ], [ "ICLR.cc/2025/Conference/Submission5212/Area_Chair_jDrQ" ], [ "ICLR.cc/2025/Conference/Submission5212/Reviewer_zh4c" ], [ "ICLR.cc/2025/Conference/Submission5212/Reviewer_xham" ], [ "ICLR.cc/2025/Conference/Submission5212/Reviewer_CXR5" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Poster)\"}", "{\"summary\": \"Model extraction is a type of attack where an attacker tries to replicate a victim model to either:\\n1. Estimate the model\\u2019s parameters to emulate the model\\u2019s performance\\n2. Copy the model\\u2019s architecture, to recreate the model as-is.\\n3. Get protected knowledge of the training data of the victim model, to better understand the data distribution it was trained on, so that other type of adversarial attacks can be done.\\n\\nExisting defense strategies are costly \\u2013 they do not differentiate between benign and malicious queries from an attacker and this form of defense allocates the same computational power to both. This paper provides a novel way to tackle model extraction attacks \\u2013 Dynamic Neural Fortresses. \\n\\nThey propose an early-exit strategy wherein the victim model has built-in early exits routes that the model can take and provide outputs that are OOD from it\\u2019s expected input-output combination. If an input query matches an early-exits threshold, the model inference exits with the output at that stage.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents an interesting defenseive method to counter model extraction attacks. The paper\\u2019s novelty lies in the core idea of using a dynamic exit strategy based on the input query. While early exit strategies have been explored in the context of neural networks, their application to defensive methods is novel.\\n2. The paper is well written, and the core idea is simple to understand. The language is lucid but see weakness 2, 3.\\n3. The paper is well organized with a clear progression between sections. Figure 1 greatly aids clarity in trying to understand the pipeline, however see weakness 2.\\n4. Experimental evaluation is robust and does seem to support the author\\u2019s claims that DNF achieve substantial reduction is successful model cloning.\\n5. This paper addresses a growing concern in the space of AI/ML model deployment \\u2013 protecting against model cloning and privacy and intellectual rights protection. This work does have the potential to help drive forward work in defense space for these attack types.\", \"weaknesses\": \"1. Despite strength 5, this method can be adapted widely only after these weaknesses are addressed and questions explored.\\n2. 
Should make better use to visual elements \\u2013 probably, atleast in the appendix, add an example of what an attack query would look like, why the victim system would classify the query as attack and what the victim model\\u2019s behaviour would be on it, how early would it exit?\\n3. Math is useful and helps to aid the reader\\u2019s understanding but at times also hampers readability. Especially in textual sections it breaks the flow of readers. Something that may help is condense the math and limit them to equations that can be repeatedly referenced or have a table of symbol notations that readers can refer to.\\n4. Some sections could use with clearer explanations - OOD Data Learning Objective, underlying theory for Entropy and IB regularization. Maybe providing examples around mutual information or ER could help.\\n5. The paper does provide some explanation about Entropy and IB regularization but could expand a little more on how mutual information reduction leads to lower predictability and can be leveraged for distinguishing between benign and malignant queries.\\n6. Maybe a comparison with other information-theory based approaches such as standard adversarial training would help drive home the imminent advantages on DNF. Another set of comparisons that could strengthen the paper\\u2019s results are against other dynamic architectures (example \\u2018BranchyNet\\u2019).\\n7. The paper uses ER to determine optimal exits from the model\\u2019s inference. However the choice of thresholds is only briefly discussed. Maybe an ablation study of various hyperparameters, exit thresholds and entropy weights could help explain the choice a certain threshold or explain the assumptions that the authors may have made.\", \"questions\": \"1. Concepts related to entropy and IB regularization are presented with some mathematical rigor and learning objectives for both ID and OOD data are presented with entropy and IB regularization contratints; However some additional insights into potential limitations are necessary \\u2013 How would the strategy perform under adaptive attacks with a much varied and increasingly sophisticated OOD spectrum? And how it would impact models that aim for domain generalizability and to incorporate that OOD spectrum into their model\\u2019s capabilities?\\n2. How does this defensive method translate to multi-modal architectures like VLMs. Or multi-pipeline methods where each branch operates on different modalities? Or ML methods where different models are trained for different modalities and their outputs are combined (via some aggregation)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper presents \\u201cDynamic Neural Fortress\\u201d or DNF framework as a defense against Model Extraction Attacks. These attacks allow an adversary to create a copy of a pre-trained model accessible via black-box APIs, posing risks to proprietary models. The authors identify two main challenges in current defenses: (1) Neural Network architecture protection, a thing that is taken for granted in previously proposed attacks by using the same model architecture for victim and clone models, and (2) optimizing computational resources by avoiding allocation of equal resources to both benign and attack queries.\\n\\nThe authors implement an Early-Exit neural network wrapper (EENN) on top of a trained model. 
This wrapper facilitates random exits at earlier layers for attack queries while preserving model utility by making benign queries exit at later layers. The authors assume the usage of out-of-distribution (OOD) data by attackers in most cases, but there are some experiments conducted for in-distribution (ID) data as well. Using concepts from deep information bottleneck theory, the authors optimize mutual information between input data, latent features, and output labels for training the EENN model. \\n\\nThe proposed method has been evaluated via testing on various architectures and datasets, and compared against other state of the art defenses.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The proposed idea of implementing early exits as a defense against model extraction is novel and sound.\", \"The method is easily adaptable to different architectures like ResNets and ViTs.\", \"The use of entropy and information bottleneck theory is sound and well-suited to the goal of reducing extractable information for the attacker.\", \"The experiments conducted cover various scenarios, models and datasets validating its generalizability. The performance comparisons with state-of-the-art defenses further strengthen its credibility.\", \"The ablation study is thorough and captures various scenarios that highlight the effectiveness of the proposed method and its components.\"], \"weaknesses\": [\"The paper presents a technically sound idea, but the presentation is poor and needs major revisions. I am listing the weaknesses sectionwise.\", \"### Related work:\", \"The related work is not organized properly, and some works are not cited in their appropriate sections, although they are cited later in the paper. For example, ActiveThief by Pal et al. (2020) [1] should be present under functionality stealing.\", \"When a model extraction attack is data-based, the data might be natural or synthetic. For E.g., I can generate a dataset of 10,000 images from a pretrained generative network and use that for model extraction. This would still fall under the category of data-based model extraction. Data-free model extraction means that the data used for stealing is generated based on some information received from the victim.\", \"Therefore, restructuring the related work section is necessary here.\", \"### Methodology:\", \"The steps followed to convert a pre-trained victim model into an EENN are not easily followed. A network is trained on the ID data first. Then exit classifiers are added on top of it. Then, an OOD generator is used to generate OOD data, which is then passed through the original network without the exit networks for inference. The steps followed after this are not written in a coherent manner. One has to go through Algorithm 1 to get a clear picture of the training process.\", \"Overuse of the term specific to start two consecutive paragraphs (224-235 and 236-241) and even inside the paragraphs when the sentences contained in both paragraphs are not specific at all.\", \"### Experimentation:\", \"The authors should differentiate between the DFME and DBME settings in more detail. In line 387, it is assumed that the reader will know that they are talking about the DFME setting instead of the soft-label setting. 
This also invites confusion regarding the budget difference between the soft and hard label settings, where the budget should be the same for valid comparison.\", \"For the DFME setting, one clone model architecture should be the same as the victim model for valid comparison (Resnet-34 in this case). Previous methods, like the prediction poisoning [2] method used by authors for comparison, have conducted experiments that keep the victim architecture for the stolen model. Moreover, the proposed method is not better than MeCo for the CIFAR-10 dataset. This should be analyzed and discussed.\", \"For the DBME setting, using the random strategy for sampling images is not ideal. It has been shown in the ActiveThief [1] paper that using an uncertainty-based sampling method is more effective.\", \"To showcase the effectiveness of the in-distribution defense, using JBDA as the attack strategy is fairly obsolete, and the paper cited needs to be corrected. The paper that proposed the attack is [3]. The authors should use either ActiveThief or Knockoff nets attack for evaluation as they are more recent and utilize intelligent sampling-based strategies for attack. If an actual attacker has access to in-distribution data, they will try to use the best strategy possible.\", \"To demonstrate the defense\\u2019s effectiveness against model architecture stealing, the authors pick the latest attack by Carlini et al. but fail to show effectiveness against previously cited work, specifically \\u201cTowards reverse-engineering black-box neural networks. In International Conference on Learning Representations, 2018.\\u201d that perform attack on imagenet models. Considering that this was one of the major claims made by the authors, they should evaluate this aspect thoroughly.\", \"### Grammar:\", \"The paper has incoherent paragraphs, spelling mistakes, and redundant sentences. Some of them are listed below:\", \"Line 225, it should be \\u201cconvert\\u201d instead of \\u201ccovert.\\u201d\", \"In Table 1 and Table 2, the spelling of label is incorrect.\", \"Appendix D, Lines 778-779, same line repeated twice.\"], \"citations\": [\"[1] Pal, Soham, et al. \\u201cActivethief: Model extraction using active learning and unannotated public data.\\u201d Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 01. 2020.\", \"[2] Orekondy, Tribhuvanesh, Bernt Schiele, and Mario Fritz. \\u201cPrediction poisoning: Towards defenses against dnn model stealing attacks.\\u201d arXiv preprint arXiv:1906.10908 (2019).\", \"[3] Papernot, Nicolas, et al. \\u201cPractical black-box attacks against machine learning.\\u201d Proceedings of the 2017 ACM on Asia conference on computer and communications security. 2017.\"], \"questions\": [\"The authors claim their approach falls under the model extraction prevention defense category. Still, it works like a detection approach where the OOD detector is built into the model itself and thus relies heavily on the OOD data used for classification. The results shared by authors, to argue otherwise, are insufficient. I would ask the authors to include more experiments for this argument.\", \"If the model is trained to early exit in the case of OOD samples, but the labels used are from the original neural network (essentially the last possible exit), what is the accuracy of the model on OOD data used for training the model? I suspect that the early exit model misclassifies OOD data with high confidence. 
If it were learning the original network\\u2019s output labels for OOD data, then the defense would not work for the hard-label setting as the attacker would still receive a large portion of the original network\\u2019s labels as output with some erroneous ones.\", \"Regarding the exit point evaluation ablation study, I would like to know the accuracy at each exit and the exact number of ID and OOD samples passing through each exit instead of terms such as \\u201cover half,\\u201d etc.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"This paper adopted a dynamic early exit neural network to defend model extraction attacks to not only preserve the model performance but also increase inference speed. After rebuttal all reviews are clearly positive and all questions are well addressed. It should be a clear accept.\", \"additional_comments_on_reviewer_discussion\": \"I think the authors did a good job for rebuttal, after discussion, there is no very serious issues remaining unsolved and all reviewers acknowledge this. The only interesting point is that there is one reviewer gave a score of 1 in the beginning and did not reply to rebuttal until two days before rebuttal ends. I sent an email to him to make sure he read all rebuttals and the second day he suddenly modified his score to 8 without any further questions. This is confusing to me and is too abnormal, so I decide to eliminate this review as an outlier. But other reviews and rebuttals are enough to decide. It's still a clear accept.\"}", "{\"summary\": \"The dynamic neural fortress (DNF) defense method introduced in this paper employs a dynamic early exit neural network to defend model extraction attacks. This approach effectively provides simultaneous protection for model functionality, network architecture, and enhanced defense efficiency against these threats. Extensive experiments demonstrate that the proposed defense method outperforms SOTA model extraction defenses in terms of both effectiveness and efficiency.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"The first defense framework simultaneously offers three key protective benefits: protecting the functionality, and model architecture, while improving the efficiency of the inference.\", \"An innovative design of the loss function is achieved by incorporating the Information Bottleneck (IB) theory.\", \"The experimental design is well-structured and covers various scenarios, effectively validating the method's effectiveness.\"], \"weaknesses\": [\"The claims regarding the protection of model architecture are overstated. Early Exit (EE) mechanisms indeed prevent attackers from executing the entire pipeline of DNN, therefore protecting the entire model architecture information from being leaked. However, the authors fail to provide how attackers might exploit this vulnerability to steal the model architecture when executing the entire network. Furthermore, EE mechanisms typically occur in the last few layers of DNNs; therefore, while the proposed approach may protect certain layers, it only works those that are unexecuted, leaving the majority of the neural network vulnerable (if there are effective attacks that can steal the model architecture). 
The authors should consider discussing these limitations in a dedicated section titled \\\"Limitations.\\\"\", \"The definitions of out-of-distribution (OOD) and in-distribution (ID) data lack clarity. It is unclear why the authors consider OOD data to be \\\"illegal\\\" while ID data is deemed \\\"legal,\\\" and the rationale behind the corresponding loss term needs further explanation. Additionally, the authors aim to minimize the mutual information between $X_{id}$ and $Z_{id}$ in Eq. (3). However, this approach could potentially compromise the overall performance of deep neural networks (DNNs). The authors should provide additional clarification on why a reduced mutual information between $X_{id}$ and $Z_{id}$ does not impact the prediction accuracy.\", \"Table 12 indicates that queries drawn from ID dataset exit at Exit2 over 90%, while the OOD queries only exit at about 75% at the same stage. This discrepancy seems inconsistent with the motivation behind two loss terms in Eq. (3) and Eq. (4). The authors should explain this discrepancy and discuss how it impacts the effectiveness of the proposed defense mechanism. I would like to suggest the authors provide a more detailed analysis of the exit patterns for ID vs OOD data.\", \"The explanation for choosing a specific mutual information optimization method to achieve the defense objectives lacks a deeper theoretical explanation and intuitive justification, making it challenging to fully follow the principles behind the proposed method.\", \"The experiments conducted to protect the model architecture appear limited, which does not sufficiently demonstrate the contribution related to model architecture protection mentioned in the paper. Consider adding additional experiments and evaluation metrics specifically designed to assess the robustness of the model architecture against potential theft.\", \"It would be advantageous to include experiments that investigate the correlation between accuracy and exit points, providing a clearer visualization of the early exit mechanism's impact. I would like to suggest a graph showing accuracy vs. exit points for both ID and OOD data or report a statistical analysis of this relationship.\", \"It seems that all datasets utilized are classification datasets, which makes it difficult to validate the effectiveness of the proposed method in other tasks and domains.\", \"The notations in this article have been used repetitively, e.g., $r$.\"], \"questions\": [\"Can the proposed defense be easily extended to other tasks and domains, such as object detection and NLP applications?\", \"Does the number of exit points impact the performance of the proposed defense?\", \"According to the design, earlier blocks are intended to reduce the model's predictive capability. However, it is notable that the ID dataset maintains high accuracy even after exiting at Exit2. This raises questions about the effectiveness of the defense mechanism. Moreover, the OOD dataset still retains 35% of its data after passing through the last two blocks. What is the observed defense effect in this case?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"This paper introduces a new defense against model extraction attack for model architecture and model utility. The key idea is to use multi-exit neural network architecture and its random exit mechanism to protect the network's architecture while ensuring the efficiency. 
For benign queries, the authors trains the early-exit model to distinguish OOD data (attack queries) and in-distribution data to ensure the model utility.\\nFinally, the authors show that DNF outperforms previous defenses and evaluate the adaptive attacks.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Good motivation. The authors adopt multi-exit architecture to defend architecture extraction attack, which is a well motivated and interesting idea.\", \"Extensive evaluation. The authors not only evaluate the defense effectiveness but also adaptive attacks.\"], \"weaknesses\": \"- The assumption of attack data are OOD data, although widely adopted in prior work, should be more carefully justified. Meanwhile, as the model's training data are unknown to the user, benign queries may also be OOD data. DNF might decrease the model utility in this case.\\n- The main part of paper (Section 4) is somehow hard to follow. I would suggest the author to simplify the notations or subscripts. Moreover, I also suggest the authors to provide an overview figure to replace some descriptions.\\n- Although the authors investigate the adaptive attacks, the adversary can still design more powerful attack by exploiting the multi-exit model. Please discuss more about the potential vulnerability of multi-exit architecture and compare with prior attacks on multi-exit networks.\\n\\n[1] Auditing Membership Leakages of Multi-Exit Networks. ACM CCS 2022.\\n\\n[2] Model Stealing Attack against Multi-Exit Networks. arXiv:2305.13584.\\n\\n[3] Mind your heart: Stealthy backdoor attack on dynamic deep neural network in edge computing. IEEE INFOCOM 2023.\\n\\n[4] Aegis: Mitigating Targeted Bit-flip Attacks against Deep Neural Networks. Usenix Security 2023.\\n\\n[5] Prediction Privacy in Distributed Multi-Exit Neural Networks: Vulnerabilities and Solutions. ACM CCS 2023.\", \"questions\": \"Can you provide a formal definition or description of in-distribution and out-distribution data in this paper's setting? How to distinguish the normal user data (OOD) and attack data (OOD)?\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"In this paper, a defense against model stealing attacks (targeting either the model architecture or its functionality) based on a multi-exit neural network is proposed. The main idea is to output accurate prediction scores for ID data from the later network exits, as well as uninformative scores for OOD data from the earlier exits. To do so, for each network exit, a thresholded classifier is trained on the respective intermediate layer representation with a specifically designed loss, which maximizes the aforementioned objective using concepts from information theory. During the deployment, an exit is chosen for a sample when the maximum score of an exit classifier exceeds the respective threshold.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"The paper presents a clearly novel idea to address a very relevant issue. 
Indeed, to the best of my knowledge, this is the first application of a multi-exit neural network to defend against model extraction attacks.\", \"The proposed network architecture can also reduce the inference time during deployment.\", \"The approach is very intuitive and well-justified.\", \"The reported results are promising.\"], \"weaknesses\": [\"90% of IID samples exit in the first 3 exits. Although this can be viewed as a benefit (it reduces the inference time), on the other side, the defense mechanism will produce less informative outputs for those samples. The impacts of these effects should be clearly understood.\", \"I appreciate the fact that the authors consider different types of attacks and try to implement adaptive ones. However, a best practice when dealing with security is to simulate a worst-case scenario against the strongest attack. This helps understand the limitations of the defense and estimate lower bounds of robustness in these settings - even if, in practice, they are unlikely to occur. In this case, the adaptive attacks should be implemented using model extraction techniques that rely on some knowledge about the training data distribution. This assumption is not too unrealistic, as it might happen that the attacker (who knows the domain on which the model is applied) is able to gather in-distribution data from public domains - for instance, if the model is a malware detector, it should be very easy to collect samples and also very likely to have some overlap between them and the training data used by the victim. In other cases, the attacker might possess a subset of or all the training data, and she could easily train its own model, but she is rather interested in reproducing the exact model functionality and reproducing its decision boundaries to build a surrogate model and use it for other attacks (like evasion ones, aka adversarial examples).\"], \"questions\": [\"Could you please estimate the impact of early exiting for IID samples? For instance, you might compute the misalignment in model outputs for IID samples when they exit early with respect to being forwarded into the entire network.\", \"Could you please evaluate the defense against a worst-case attacker, enhancing the already implemented adaptive attacks with (partial) knowledge of the training data distribution?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}" ] }
01wMplF8TL
INSTRUCTION-FOLLOWING LLMS FOR TIME SERIES PREDICTION: A TWO-STAGE MULTIMODAL APPROACH
[ "Yu Meng", "Malik Tiomoko" ]
We introduce Text-Informed Time Series Prediction (TITSP), an innovative multimodal framework that integrates textual knowledge with temporal dynamics using Large Language Models (LLMs). TITSP employs a two-stage process that bridges numerical data with rich contextual information for enhanced forecasting accuracy and interpretability. In the first stage, we present AutoPrompter, which captures temporal dependencies from time series data and aligns them with semantically meaningful text embeddings. In the second stage, these aligned embeddings are refined by incorporating task-specific textual instructions through an LLM. We evaluate TITSP on several multimodal time series prediction tasks, demonstrating substantial improvements over state-of-the-art baselines. Quantitative results reveal significant gains in predictive performance, while qualitative analyses show that textual context enhances interpretability and actionable insights. Our findings indicate that integrating multimodal inputs not only improves prediction accuracy but also fosters more intuitive, user-centered forecasting.
[ "Large Language Models", "Time-series Prediction", "Multi-modal", "Instruction-following" ]
Reject
https://openreview.net/pdf?id=01wMplF8TL
https://openreview.net/forum?id=01wMplF8TL
ICLR.cc/2025/Conference
2025
{ "note_id": [ "yIFLKhoq1b", "qTzq3ayPJ3", "qGHr3fuaxB", "pipXklFh72", "lnU1bacqhQ", "gU5YYFB4qu", "eadJpVwh8t", "ZzrTyCsj7t", "ZL3Zc8835C", "XgyvdYE1ln", "VDCHyFBDlM", "Udk2U7lYYw", "RgQzZOxFAX", "NsQIByRUPO", "KnW1mGxVFx", "IICO2K0V4D", "HNantkZwp3", "GSmXJdj7HA", "C7yO82j6FM", "85x40s7jSQ", "6wfNfd2oup", "6rIeTQhMDr", "3qv3dT4N3R", "2QUpfkxDtD", "1qSqW4mfU6", "0lF97M7CMQ" ], "note_type": [ "decision", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment" ], "note_created": [ 1737524052559, 1732598924451, 1731931218677, 1732174198821, 1732174269565, 1730199891836, 1731924541688, 1730670396894, 1732173571215, 1732334942657, 1731931010638, 1731925115254, 1731894943979, 1734653631779, 1732220511230, 1731932283644, 1730302004340, 1732597574189, 1731932869712, 1732603619049, 1731925218784, 1731931927581, 1732174335714, 1730296779771, 1731930751362, 1731931481026 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_GGqR" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_We4d" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_mT1k" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_We4d" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Area_Chair_rCos" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_mT1k" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_GGqR" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_YdJR" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Reviewer_YdJR" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ], [ "ICLR.cc/2025/Conference/Submission10424/Authors" ] ], "structured_content_str": [ "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"comment\": \"I appreciate the authors' responses, which partially address my concerns. However, I believe the writing quality of this paper does not meet the standards expected for this conference. I encourage the authors to review some of the recent papers they have cited, such as:\\n\\n- Jin, M., Wang, S., Ma, L., Chu, Z., Zhang, J. Y., Shi, X., ... & Wen, Q. (2023). Time-llm: Time series forecasting by reprogramming large language models. arXiv preprint arXiv:2310.01728.\\n- Liu, Y., Hu, T., Zhang, H., Wu, H., Wang, S., Ma, L., & Long, M. (2023). itransformer: Inverted transformers are effective for time series forecasting. 
arXiv preprint arXiv:2310.06625.\\n\\nThese papers exemplify the level of clarity and structure that is expected. I recommend the authors consider these examples to improve the organization and presentation of their work. Consequently, I have decided to maintain my original score.\"}", "{\"title\": \"Response To Reviewer GGqR- result table\", \"comment\": \"For convenience, we also provide table 5 in the paper about the test results.\\n\\n### Table: Comparison of Compliance Rate (CR) and MSE for TITSP, Time-LLM, Qwen4MTS, UniTime, and Llama-3.1-8B across various instructed actions with highlighted best (in **bold**) and second-best (in _underlined_) results.\\n\\n| **Instruction** | **TITSP (CR)** | **TITSP (MSE)** | **Time-LLM (CR)** | **Time-LLM (MSE)** | **Qwen4MTS (CR)** | **Qwen4MTS (MSE)** | **UniTime (Qwen) (CR)** | **UniTime (Qwen) (MSE)** | **Llama-3.1-8B (CR)** | **Llama-3.1-8B (MSE)** |\\n|---------------------------------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------------------|-------------------------|------------------------|------------------------|\\n| Linear Growth and Linear Decay | **0.83** | **1.15** | 0.38 | 3.45 | _0.69_ | _1.90_ | 0.54 | 2.73 | 0.32 | 4.95 |\\n| Linear Growth and Linear Decay | **0.79** | **1.17** | 0.49 | 2.85 | **0.79** | _1.34_ | 0.57 | 2.28 | 0.41 | 2.80 |\\n| Linear Trend Up | _0.90_ | **1.03** | 0.63 | 1.71 | 0.76 | _1.08_ | 0.63 | 1.65 | **0.91** | 1.15 |\\n| Linear Trend Down | **0.87** | **0.88** | 0.64 | 1.55 | 0.71 | 1.36 | 0.51 | 1.59 | _0.85_ | _0.92_ |\\n| Exponential Growth | **0.89** | **1.33** | 0.58 | 2.59 | _0.63_ | _2.07_ | 0.60 | 2.38 | 0.58 | 2.35 |\\n| Exponential Decay | **0.84** | **1.25** | 0.56 | 2.26 | 0.67 | 2.10 | _0.69_ | _2.05_ | 0.46 | 2.39 |\\n| Keep Stable | **0.98** | _0.35_ | 0.76 | 0.76 | 0.93 | 0.48 | 0.83 | 0.62 | _0.95_ | **0.33** |\\n| Decrease Amplitude | **0.90** | _0.91_ | 0.85 | 1.04 | **0.90** | **0.84** | 0.79 | 1.09 | 0.52 | 1.89 |\\n| Increase Amplitude | **0.94** | **0.94** | 0.79 | 1.20 | _0.89_ | _0.96_ | 0.81 | 1.03 | 0.75 | 1.35 |\\n| Logarithmic Growth | _0.77_ | _1.65_ | 0.49 | 2.31 | **0.79** | **1.55** | 0.60 | 1.73 | 0.55 | 1.94 |\\n| Logarithmic Decay | **0.83** | **1.68** | 0.48 | 2.19 | _0.81_ | _1.69_ | 0.67 | 2.04 | 0.63 | 2.60 |\"}", "{\"title\": \"Response To Reviewer mT1k\", \"comment\": \"Dear reviewer: I wonder if our response can solve your concerns! Thank you!\"}", "{\"title\": \"Response to Reviewer GGqR\", \"comment\": \"Dear reviewer: I wonder if our response can solve your concerns! Thank you!\"}", "{\"summary\": \"The paper proposes a novel two-stage for multimodal forecasting through historical data and textual cues that are useful for LLM-based forecasters. The multimodal framework is evaluated on numerous multimodal forecasting tasks. The paper provides a setup to include expert opinions for a forecasting problem.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": \"The strengths include the relevance of the problem of text-aided forecasting and the novelty of the prompting method. The methodology section is comprehensive and well-described, and the techniques and experiments have been explained in detail and are easy to follow. The figures convey the overall idea and highlight the improvements over the no-instruction setup.\", \"weaknesses\": \"The primary weaknesses of the paper are as follows:\\n\\n1. 
**Incomplete Literature Coverage**: Section 2.2 does not fully address relevant multimodal forecasting models, omitting key references such as UniTime ([https://dl.acm.org/doi/10.1145/3589334.3645434](https://dl.acm.org/doi/10.1145/3589334.3645434)).\\n\\n2. **Limited Comparative Analysis**: The results lack sufficient comparison with other multimodal forecasting models, reducing insight into how the proposed method performs relative to similar approaches.\\n\\n3. **Insufficient Dataset Description**: Essential dataset details, including sample counts, history length, and forecasting horizon, are not provided. Additionally, the impact of the forecasting horizon on prediction quality remains underexplored.\\n\\n4. **Simplistic Experimental Instructions**: The experimental instructions are overly simplistic, failing to reflect realistic scenarios. The limited set of training instructions may also suggest that simpler alternatives for instruction embedding could have been more effective.\\n\\n5. **Circular Evaluation**: The evaluation datasets have been tailored from existing datasets based on the training instructions intended for evaluation, which creates a circular reasoning issue that undermines the reliability of the evaluation setup. A similar statement about the order compliance rate metric can also be made.\\n\\n**Minor Issues:**\\n\\n1. The paper inconsistently uses closing quotes (\\\") instead of opening quotes (``) in multiple locations, including but not limited to lines 197, 203, and 213.\\n\\n2. Textual citations, rather than parenthetical citations, would be more suitable for the references in lines 117 to 128, enhancing the readability and flow of the text.\\n\\n3. Appropriate citations are not provided for the original dataset sources.\", \"questions\": \"Questions:\\n1. The choice of order compliance rate as an evaluation metric is intriguing. This metric appears specifically tailored to the instructions outlined in the paper, which may limit its applicability to real-world scenarios. Could you clarify the advantages this metric offers over existing metrics for evaluating forecasting performance?\", \"suggestions\": [\"Benchmark results against a broader selection of existing multimodal forecasting models to enhance comparative insights.\", \"Include a detailed discussion of the dataset, covering aspects such as sample size, history length, and forecasting horizon.\", \"If feasible, incorporate more complex textual cues in the experiments to better reflect real-world forecasting challenges.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"5\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response To Reviewer mT1k\", \"comment\": \"### Thank you for your precious comments!\\nThe following are our responses to your concerns.\\n\\n---\\n\\n### Comment 1: \\n*There seems to be a mismatch between the described technique used to apply the modification (equation 3), and the examples shown (figure 3). According to the equation, the data in the forecast window should be a pure affine function, without any of the noise shown in figure 3.*\\n\\n**Response:** \\nWe thank the reviewer for highlighting this point. Equation (3) indeed describes a pure affine function; however, to ensure an increasing trend in certain time series, we allowed the slope \\\\(A\\\\) to vary within the forecast window. 
This deliberate choice introduces some noise, as shown in Figure 3, but it demonstrates the model\\u2019s ability to adapt to evolving trends. For clearer examples without slope variation, please refer to Figure 10 in the Appendix (page 20). We have clarified this in the revised manuscript on page 4, where a comment is added to clarify this important point.\\n\\n---\\n\\n### Comment 2: \\n*While the model is tested against other multimodal text+timeseries models, it should also be tested against pure LLM approaches: just plugging the text and the history in a prompt for GPT-4 or Llama 3, and looking at the generated output. While such an approach won't scale to long series, recent work has shown it to be surprisingly decent at forecasting under textual instructions. See: LLM Processes by Requiema 2024 for a slightly more complex approach, but there may be more appropriate references for the more direct one.*\\n\\n**Response:** \\nWe thank the reviewer for this suggestion. Although LLMs have shown some capability with time series data, they are fundamentally designed for language tasks and often struggle with numerical accuracy, as highlighted by several studies. This limitation motivated our dual-channel approach, where time series and text are processed in specialized frameworks, leveraging an expert model for each modality.\\n\\nIn *Table 2 (page 9)*, we conduct an experiment by directly prompting Llama-3.1-8B-Instruct to perform these tasks. The results show good understanding of simple instructions but significant failures in most tasks. This approach also demonstrates instability, as the output may be challenging to directly utilize due to the mixture of numerical values and textual content. The designed prompt is shown in *Appendix I*.\\n\\nIn the Appendix (see *Table 5*), we present an experiment comparing our dual-channel method with GPT4TS\\u2014a purely LLM-based model for time series (for descriptive text instead of instructions). Despite GPT\\u2019s strong backbone (compared to Qwen used for our approach), our method outperforms it, confirming that dual-channel designs are more effective for multimodal tasks. Additionally, as the reviewer suggested, we conducted a new experiment focused on instruction-based tasks, which is the main focus of the paper. Here, our model also demonstrated superior compliance rates compared to Qwen4TS, underscoring the advantages of dual-channel methods for instruction-based text. These results are now included in the revised manuscript in *Table 2 (page 9)*.\\n\\n\\n---\\n\\n### Comment 3: \\n*Hyperparameters and training curriculum for the timeseries portion of the model are missing.*\\n\\n**Response:** \\nWe thank the reviewer for pointing this out. The missing experimental details regarding the hyperparameters and training curriculum for the time series feature extractor are now included in the updated version of the manuscript. These details are provided in *Section I of Appendix*, where we outline the specific settings and the training procedure used for this part of the model, as well as the experimental setup for training UniTime and Qwen4MTS for the new additional experiments.\"}", "{\"summary\": \"The article describe a new model to incorporate textual information with a more traditional timeseries forecasting model. It does so by combining an embedding computed from the historical numerical data with an embedding computing from the textual information. 
The combined embedding is then used to generate the forecast.\n\nThe model is tested both on real-world data, where it shows competitive results, and on generated data, where it is shown to follow the instructions included in the textual information.", "soundness": "3", "presentation": "2", "contribution": "3", "strengths": "1. It is good that zero-shot examples of descriptions that were not provided in the training set have been tested. Without those, the narrow set of possible descriptions could have made it impossible to check whether the result quality came from the model overfitting on these descriptions or not.\n2. Training the model using generated data and computing how well the model follows the instructions is a relatively clean way to do a proof of concept of the idea, which is appropriate currently, as the field of using LLM and timeseries models together is still in its infancy.", "weaknesses": "1. There seems to be a mismatch between the described technique used to apply the modification (equation 3), and the examples shown (figure 3). According to the equation, the data in the forecast window should be a pure affine function, without any of the noise shown in figure 3.\n2. While the model is tested against other multimodal text+timeseries models, it should also be tested against pure LLM approaches: just plugging the text and the history into a prompt for GPT-4 or Llama 3, and looking at the generated output. While such an approach won't scale to long series, recent work has shown it to be surprisingly decent at forecasting under textual instructions. See: LLM Processes by Requeima 2024 for a slightly more complex approach, but there may be more appropriate references for the more direct one.\n3. Hyperparameters and training curriculum for the timeseries portion of the model are missing.", "questions": "1. For Table 4, can you provide the same results, but for your model instead of only for TimeLLM? It would make it more obvious whether your model succeeds on those tasks with incorrect textual information.\n2. For the real-world datasets, was the textual information always constant (as shown in Section B.3) for each dataset? This would allow a fine-tuned model to fully ignore it, since it could bake said information into its weights anyway.", "flag_for_ethics_review": "['No ethics review needed.']", "rating": "5", "confidence": "3", "code_of_conduct": "Yes"}", "{\"title\": \"Response to Reviewer We4d\", \"comment\": \"Dear reviewer:\n I wonder if our response can solve your concerns! Thank you!\"}", "{\"comment\": \"Thank you for addressing my comments and revising the paper.\n \nWhile most of my concerns have been addressed, I still have some questions regarding the core contribution of this work. In Section 2.2, the authors claim to present a _novel framework_ for forecasting that leverages textual instructions and demonstrate its superior performance over existing frameworks in Table 2. However, the claimed novelty of this framework compared to existing methodologies remains unclear.
I request the authors to further **elaborate on the framework's uniqueness as compared to the existing methods** and include the **parameter counts for both their model and the benchmarks** to confirm that the improvements are not merely due to higher computational resources.\\n\\nFurthermore, despite the authors' appreciating the raised typographical issues, such issues have continued into the revised sections, with some of them listed below:\\n1. Incorrect quotations- line 113\\n2. Incorrect parenthetical citations- line 127\\n3. Spelling errors- line 107 (_success_ - _sucess_)\\n\\nSuch typos, though minor, are numerous enough to raise concerns about the overall credibility of the paper. For now, I will maintain my current score.\"}", "{\"title\": \"Response to Reviewer GGqR - question part\", \"comment\": \"### Question 1:\\n*How would the proposed model perform without access to textual inputs or under noisy conditions? If textual instructions are incomplete, inconsistent, or contain noise, how would the model's performance be affected? This scenario is particularly relevant in high-stakes areas like finance, where decision-making often involves dealing with imperfect information. What measures have been taken to ensure robustness against these issues, which are common in real-world data?*\\n\\n**Response:** \\nWe appreciate the reviewer\\u2019s question on robustness under imperfect or noisy textual inputs. In \\u201cwhat-if\\u201d scenarios, even if expert instructions are incomplete or contain intentional inaccuracies, the model\\u2019s primary goal is to follow these inputs accurately. This is often precisely what an expert wants\\u2014simply to test various hypothetical scenarios by observing how the model behaves under different instructions, rather than to ensure perfectly accurate instructions. Our model\\u2019s high compliance rate shows that it reliably adheres to these inputs, enabling experts to evaluate potential outcomes and behaviors without needing to formalize each scenario mathematically within the framework.\\n\\nAlthough handling incomplete or inconsistent text is not the main focus of our work, we recognize its relevance. In the Appendix (_Table 5, Page 18_), we include experiments comparing our model's performance with and without textual inputs. These results show that even in cases without explicit instructional text, our method outperforms purely time-series models, highlighting the value of informative text for forecasting accuracy. This additional evaluation demonstrates the model\\u2019s ability to handle various text input scenarios, further affirming its robustness and versatility.\\n\\n---\\n\\n### Question 2:\\n*How does the proposed framework address interpretability in practice? The paper claims that incorporating textual instructions enhances interpretability, but there are no concrete demonstrations of how this contributes to meaningful insights for domain experts. Could you provide explicit examples or user studies that validate this claim? Without such evidence, how can the claim of improved interpretability be substantiated?*\\n\\n**Response:** \\nWe thank the reviewer for raising this question on interpretability. The interpretability of our framework primarily comes from its capacity to directly link predictions to textual instructions. For example, if a linear growth is predicted, it can be traced back to specific input instructions, providing clear insight into why a particular behavior was forecasted. 
Additionally, attention map visualizations (see _Figure 21, Page 27_) reveal that the model highlights relevant keywords from the instructions, further demonstrating its focus on critical components of the input. This not only makes the reasoning process transparent but also allows experts to verify that the model is attending to meaningful terms.\\n\\nOur framework\\u2019s generalization ability also contributes to interpretability, as it shows the model\\u2019s capacity to apply learned associations to new contexts, indicating it understands the core instruction beyond specific examples. While we acknowledge the value of explicit user studies, these elements collectively provide substantial interpretability by aligning the model's outputs directly with expert input and highlighting the key instructions that guide predictions.\"}", "{\"title\": \"Response To Reviewer mT1k - question parts\", \"comment\": \"### Question 1:\\n*For Table 4, can you provide the same results, but for your model instead of only for TimeLLM? It would make it more obvious whether your model succeeds on those tasks with incorrect textual information.*\\n\\n**Response:** \\nWe thank the reviewer for this insightful suggestion. As part of our ongoing experiments, we aim to address this by evaluating our model under the same conditions. Specifically, for the same base time series (same context length), we provide multiple different instructions and observe that our model achieves a high compliance rate, demonstrating its ability to follow instructions accurately. In contrast, TimeLLM exhibits lower compliance, highlighting the importance of the instructions. We appreciate the reviewer\\u2019s input, and we have now included these results in the updated manuscript for even more models (Qwen4TS and UniTime) (see *Table 2, page 9*).\\n\\n---\\n\\n### Question 2:\\n*For the real-world dataset, was the textual information always constant (as shown in Section B.3) for each dataset? This would allow a fine-tuned model to fully ignore it, since it could bake said information in its weights anyway.*\\n\\n**Response:** \\nWe thank the reviewer for raising this important point. In our experiments, the format of the textual prompts varied across datasets, ensuring that the model was exposed to different types of instructions and did not simply memorize a single format. However, within each dataset, the prompt format remained consistent to ensure a fair evaluation of the model's ability to handle the specific instructions. This approach prevents the model from \\\"baking\\\" the textual information into its weights and ensures it adapts to diverse instructions. We have clarified this in *Section B.3, page 19* of the updated manuscript, where we add another prompt format for an additional dataset (Traffic).\"}", "{\"comment\": [\"# Dear Reviewers\", \"We would like to express our sincere gratitude for your time and valuable feedback on our paper. Below, we provide a detailed response to each of the reviewers' comments and outline the revisions made to address their concerns. 
We have followed your suggestions and believe the manuscript has significantly improved as a result.\", \"## Summary of Changes\", \"In response to the reviewers' comments, we have made the following key changes to the manuscript:\", \"Clarification of the dataset generation to handle the question on the apparent mismatch between Equation 3 and Figure 3 on **page 4** (**Reviewer mT1k**).\", \"Additional experiments by adding two baselines (UniTime and Qwen pure LLM) for text instruction in Table 2 on **page 9** (**Reviewers mT1k, GGqR, YdJR, and We4d**).\", \"Incorporation of additional state-of-the-art methods, including other multimodal papers such as UniTime, on **page 3** (**Reviewers We4d and GGqR**).\", \"Detailed architecture design (number of layers, architecture, hyperparameters) for the time series portion, as well as a detailed description of the datasets in **Section I of the appendix** (on **page 30**) (**Reviewers mT1k and GGqR**).\", \"Clarification about AutoPrompter on **page 5** (**Reviewer GGqR**).\", \"**Added more related work** on traditional time-series prediction models in **Section 2.1**.\", \"The detailed responses to each reviewer\\u2019s comments are provided below.\"]}", "{\"metareview\": \"This paper presents a text-instruction following time-series forecasting model that uses LLMs to combine embeddings from the historical numerical data with embeddings from the textual information. While the problem space of integrating text and time-series is very timely and practical, several reviewers expressed concerns (that the AC agrees with) the rather simplistic textual instructions and evaluation methodology, and the limited real-world practical applicability of modeling textual inputs in terms of such clear, simple instructions. While the authors do also experiment with descriptive text prompts in the appendix, this seems more of an afterthought and seems disconnected from the main theme of the paper. I would urge the authors to resubmit the paper to a future venue, after performing a more comprehensive evaluation using both descriptive text prompts and a richer, noisier set of textual instructions.\", \"additional_comments_on_reviewer_discussion\": \"Some reviewers asked for adding more baselines and multimodal approaches to the evaluation. Furthermore, reviewers had several questions on the dataset generation, and the potential of leakage in the evaluation methodology and metrics. The authors addressed the authors these points satisfactorily. However they were not able to alleviate the concerns of some reviewers around the limited applicability of the instruction-following setting and the lack of a richer set of text prompts in the evaluation framework.\"}", "{\"comment\": \"Thanks for answering my comments and questions.\", \"w1\": \"Figure 10 seems to indicate that \\\"Linear Growth\\\" is gotten by adding an affine function to the original data. This may be compatible with equations (4) and (5) (which are not very clear), but is definitely still not compatible with equation (3) and Figure 3 (which is not compatible with such a transformation). Please make sure that the method you used to modify your data is accurately documented in your paper to allow other researchers to reproduce your work.\", \"w2\": \"Thanks for adding the extra experiments. 
Larger scale LLMs would have performed better, but would have been more costly.\", \"w3\": \"Thanks for adding the extra details.\", \"q1\": \"While Table 8 does show the impact of changing the way the textual information is phrased (and shows that it has an impact on the model), it doesn't outright give incorrect information (as in Table 4). I would still be curious to see the result of such an experiment for your model.\", \"q2\": \"Is the model trained with all the datasets at once, or one version is trained for each dataset? (This may already been mentioned in the paper.) It is true that varied prompts for each dataset would help in the former case, it wouldn't have an impact in the later case.\\n\\nOverall, I would still need to think whether to increase your paper score or keep it as is. I will have to take some time to reread my fellow reviewers comments and reread the paper before doing so.\"}", "{\"title\": \"Response to Reviewer We4d\", \"comment\": \"### Comment 1:\\n*Major issues*\\n\\n**Response:** \\nWe thank the reviewer for their insightful comments. We address each of the points raised as follows:\\n\\n- **Incomplete Literature Coverage:** \\n We appreciate the reviewer pointing out the omission of key references such as UniTime. We have incorporated this important work, along with other relevant multimodal forecasting models, into the updated version of the paper. The related work section has been expanded to provide a more comprehensive overview of the field and to better position our contribution in relation to existing approaches.\\n\\n- **Limited Comparative Analysis:** \\n We appreciate the reviewer\\u2019s insightful feedback regarding the need for broader comparisons with other multimodal forecasting models. To address this concern, we have expanded our comparisons to include additional multimodal models, particularly in scenarios where descriptive text is provided alongside time series data. Notably, we have included comparisons with GPT-4TS, TimeLLM, and purely time-series-based models (_Table 5_), and our method outperforms these models in the tasks considered, demonstrating its high performance even outside the scope of instruction-based tasks. Additionally, we present a detailed evaluation of our method against UniTime and Qwen4MTS in _Table 2 (Page 9)_ to further address the reviewer\\u2019s concerns.\\n\\n- **Insufficient Dataset Description:** \\n We apologize for the lack of detail regarding the datasets. We have taken care to include all relevant dataset details\\u2014such as sample counts, history length, and forecasting horizon\\u2014in the updated manuscript. A detailed analysis is presented in Section I of the Appendix.\\n\\n- **Simplistic Experimental Instructions:** \\n While the instructions considered in our work may seem simple, they represent an essential first step toward more complex scenarios where complete document instructions can be provided. Our research serves as a foundational effort, demonstrating that even with simple cases, several challenges must be addressed using existing algorithms like TimeLLM. We show that these challenges can be effectively managed with a specifically designed architecture.\\n\\n Although there is room for improvement in handling instructions, our paper already considers scenarios where the test text instructions differ from those used during training. These instructions are somewhat complex, combining several base instructions, and our methods demonstrate good generalization capabilities (_Table 3_). 
This success suggests promising perspectives toward the ultimate goal of tackling any instruction, regardless of complexity.\\n\\n- **Circular Evaluation:** \\n We appreciate the reviewer\\u2019s feedback regarding the evaluation datasets and the potential for circular reasoning. To address this concern, we highlight that in _Table 3_, we provide an assessment of the generalization capabilities of our model. In this evaluation, the training and test instructions differ significantly, which we believe is a fair and robust way to evaluate the model's ability to handle instructions adequately.\\n\\n This approach ensures that our model is tested on scenarios not explicitly covered during training, providing a more reliable measure of its performance in real-world applications. We are confident that this evaluation demonstrates the model's generalization capabilities and addresses the reviewer\\u2019s concerns about the reliability of our evaluation setup.\\n\\n---\\n\\n### Comment 2:\\n*Minor issues*\\n\\n**Response:** \\nThank you for your detailed feedback on our paper. We appreciate your time and effort in providing these valuable comments. We will take each of your suggestions into account in the updated version of the paper.\\n\\n- Regarding the inconsistent use of closing quotes (`\\\"`) instead of opening quotes (`\\u201c`) in multiple locations, including but not limited to lines 197, 203, and 213, we will ensure that the correct quotation marks are used throughout the manuscript.\\n- We agree with your suggestion to use textual citations rather than parenthetical citations for the references in lines 117 to 128. This will enhance the readability and flow of the text.\\n- Additionally, we will provide appropriate citations for the original dataset sources.\\n\\nThank you once again for your constructive feedback. We look forward to addressing these points in the revised version of the paper.\\n\\n---\"}", "{\"summary\": \"The paper presents Text-Informed Time Series Prediction (TITSP), a multimodal framework that integrates textual context with time series data using Large Language Models (LLMs). The approach involves two stages: AutoPrompter, which aligns time series data with text embeddings, and a refinement stage that incorporates task-specific textual instructions to enhance prediction accuracy and interpretability. While TITSP proves particularly effective for context-rich forecasting tasks, by demonstrating improved performance under specific settings against some other methods.\", \"soundness\": \"2\", \"presentation\": \"1\", \"contribution\": \"2\", \"strengths\": [\"A novel two-stage framework for integrating temporal and textual data.\", \"A data generation workflow for instruction-based forecasting, compatible with LLMs.\", \"Comprehensive ablation studies and comparative evaluations demonstrating the effectiveness of TITSP.\"], \"weaknesses\": [\"**Technical Contributions are Incremental** The proposed approach lacks significant technical innovation. Integrating LLMs with time series is an incremental step rather than a groundbreaking contribution. The use of cross-attention and VQ-VAE offers no substantial improvement beyond established techniques.\", \"**Poor Structure and Clarity** The paper is poorly organized, with unclear explanations and an incoherent flow. 
The motivation and rationale for the proposed method are inadequately communicated, and critical components like AutoPrompter are explained in a convoluted manner, hindering comprehension.\", \"**Inadequate Experiments** Experimental validation is weak, relying heavily on synthetic datasets that limit the assessment of practical applicability. Comparisons to related state-of-the-art methods are lacking, and statistical significance testing is absent, making it difficult to validate the performance claims.\", \"**Superficial Related Work** The related work section lacks depth and fails to properly differentiate the contribution from prior research. Key works are missing or insufficiently discussed, weakening the justification for originality.\", \"**Numerous Typos and Lack of Polish** Frequent typos (e.g. citation mistaches in line 54-55), poorly formatted figures(fig. 6), and poorly constructed tables suggest a lack of careful proofreading, which detracts from the overall quality and credibility of the paper.\", \"**Insufficient Practical Insights** The claimed interpretability through textual integration lacks demonstration. There are no real-world examples showing how domain experts would benefit from these insights, making the practical value of TITSP unclear.\"], \"questions\": [\"**How would the proposed model perform without access to textual inputs or under noisy conditions?** If textual instructions are incomplete, inconsistent, or contain noise, how would the model's performance be affected? This scenario is particularly relevant in high-stakes areas like finance, where decision-making often involves dealing with imperfect information. What measures have been taken to ensure robustness against these issues, which are common in real-world data?\", \"**How does the proposed framework address interpretability in practice?** The paper claims that incorporating textual instructions enhances interpretability, but there are no concrete demonstrations of how this contributes to meaningful insights for domain experts. Could you provide explicit examples or user studies that validate this claim? Without such evidence, how can the claim of improved interpretability be substantiated?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"I appreciate the authors' response. However, the issue of data leakage and the resulting concerns regarding practical applicability remain unresolved.\\n\\nI understand the authors' claim that their model can effectively capture textual instructions about future time series, outperforming previous models. Nonetheless, in real-world scenarios, it is highly improbable that we would have access to highly accurate future textual data. This implies that the textual information representing future trends in practical applications is likely to be significantly inaccurate, resulting in a substantial difference between the training and testing datasets of the framework and real-world conditions. 
Even if we hypothetically assume that we could reliably obtain highly accurate textual instructions about the future, would we then only require manual intervention based on these precise descriptions to make predictions?\\n\\nIn summary, I am concerned that there is a substantial disconnect between the future information used for training and testing and the future textual descriptions that will be available in practical applications, which raises questions about the actual efficacy of the proposed framework.\"}", "{\"title\": \"Response to Reviewer We4d\", \"comment\": \"### Comment 3:\\n*The choice of order compliance rate as an evaluation metric is intriguing. This metric appears specifically tailored to the instructions outlined in the paper, which may limit its applicability to real-world scenarios. Could you clarify the advantages this metric offers over existing metrics for evaluating forecasting performance?*\\n\\n**Response:** \\nWe appreciate the reviewer\\u2019s thoughtful question. The order compliance rate was specifically chosen as an evaluation metric because the primary goal of our work is to assess how well the model adheres to the given textual instructions. Since we are the first to propose text instruction-based forecasting, existing metrics for traditional forecasting tasks may not fully capture the performance of a model that must follow complex, hypothetical instructions. \\n\\nThe compliance rate, therefore, offers a tailored and effective way to measure this adherence. While it may seem specific to our setting, we believe it is a novel and valuable metric for this new approach. Its design enables us to quantify how well the model aligns with textual instructions, which is central to the novelty of our framework. We hope this clarifies why this metric is appropriate and meaningful in the context of our work.\\n\\n### Suggestions of the reviewer:\\n*Benchmark results against a broader selection of existing multimodal forecasting models to enhance comparative insights.\\nInclude a detailed discussion of the dataset, covering aspects such as sample size, history length, and forecasting horizon.\\nIf feasible, incorporate more complex textual cues in the experiments to better reflect real-world forecasting challenges.*\\n\\n**Response:**\\nWe thank the reviewer for their valuable suggestions. In the updated version of the paper (_Table 2, page 9_), we have included an exhaustive comparison with other methods tailored for text-instruction-based forecasting, including Qwen4MTS and UniTime, as also requested by Reviewer mT1k. Additionally, we extend the evaluation to descriptive tasks, where text serves as a description rather than an instruction. In the Appendix (_Table 5_), we compare our method against several benchmarks (including GPT4TS, TimeLLM, and time series-based forecasters) and show that, even without textual instructions (with descriptive texts about the task), our approach outperforms other models, demonstrating its broader applicability. \\n\\nWe have also added more details on the datasets used, including sample size, history length, and forecasting horizon in Section I of the Appendix.\\n\\nFurther More, our method could handle long sequence input which is shown in appendix G, and it shows a good attention map that it can extract key words regarding to the user instruction so that it could be used in real life.\"}", "{\"comment\": \"Thank you for your insightful response. 
We truly appreciate your feedback, which encourages us to further elaborate on our approach.\\n\\nTITSP is designed to revolutionize time-series prediction by making it interactive. In contrast to traditional deep learning methods, which often fail in real-world applications due to their reliance solely on historical data, TITSP empowers users to actively participate in the prediction process. Deep learning models tend to learn only from past patterns, requiring users to repetitively re-engineer features and retrain models to have better performance.\\n\\nIn our framework, we enable users to engage directly with the prediction process, not by assuming perfect textual descriptions, but by allowing them to inject their expert judgment and professional knowledge. This integration of human insight into time-series forecasting marks a significant departure from conventional methods, creating a more dynamic and adaptive approach to prediction.\"}", "{\"title\": \"Response To Reviewer mT1k - result table\", \"comment\": \"For convenience, we also provide table 5 in the paper about test results\\n\\n### Table: Comparison of Compliance Rate (CR) and MSE for TITSP, Time-LLM, Qwen4MTS, UniTime, and Llama-3.1-8B across various instructed actions with highlighted best (in **bold**) and second-best (in _underlined_) results.\\n\\n| **Instruction** | **TITSP (CR)** | **TITSP (MSE)** | **Time-LLM (CR)** | **Time-LLM (MSE)** | **Qwen4MTS (CR)** | **Qwen4MTS (MSE)** | **UniTime (Qwen) (CR)** | **UniTime (Qwen) (MSE)** | **Llama-3.1-8B (CR)** | **Llama-3.1-8B (MSE)** |\\n|---------------------------------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------------------|-------------------------|------------------------|------------------------|\\n| Linear Growth and Linear Decay | **0.83** | **1.15** | 0.38 | 3.45 | _0.69_ | _1.90_ | 0.54 | 2.73 | 0.32 | 4.95 |\\n| Linear Growth and Linear Decay | **0.79** | **1.17** | 0.49 | 2.85 | **0.79** | _1.34_ | 0.57 | 2.28 | 0.41 | 2.80 |\\n| Linear Trend Up | _0.90_ | **1.03** | 0.63 | 1.71 | 0.76 | _1.08_ | 0.63 | 1.65 | **0.91** | 1.15 |\\n| Linear Trend Down | **0.87** | **0.88** | 0.64 | 1.55 | 0.71 | 1.36 | 0.51 | 1.59 | _0.85_ | _0.92_ |\\n| Exponential Growth | **0.89** | **1.33** | 0.58 | 2.59 | _0.63_ | _2.07_ | 0.60 | 2.38 | 0.58 | 2.35 |\\n| Exponential Decay | **0.84** | **1.25** | 0.56 | 2.26 | 0.67 | 2.10 | _0.69_ | _2.05_ | 0.46 | 2.39 |\\n| Keep Stable | **0.98** | _0.35_ | 0.76 | 0.76 | 0.93 | 0.48 | 0.83 | 0.62 | _0.95_ | **0.33** |\\n| Decrease Amplitude | **0.90** | _0.91_ | 0.85 | 1.04 | **0.90** | **0.84** | 0.79 | 1.09 | 0.52 | 1.89 |\\n| Increase Amplitude | **0.94** | **0.94** | 0.79 | 1.20 | _0.89_ | _0.96_ | 0.81 | 1.03 | 0.75 | 1.35 |\\n| Logarithmic Growth | _0.77_ | _1.65_ | 0.49 | 2.31 | **0.79** | **1.55** | 0.60 | 1.73 | 0.55 | 1.94 |\\n| Logarithmic Decay | **0.83** | **1.68** | 0.48 | 2.19 | _0.81_ | _1.69_ | 0.67 | 2.04 | 0.63 | 2.60 |\"}", "{\"title\": \"Response to Reviewer YdJR - result table\", \"comment\": \"For convenience, we also provide table 5 in the paper, which is about the test results. 
We add more experiments to show the effectiveness of our algorithm.\\n\\n### Table: Comparison of Compliance Rate (CR) and MSE for TITSP, Time-LLM, Qwen4MTS, UniTime, and Llama-3.1-8B across various instructed actions with highlighted best (in **bold**) and second-best (in _underlined_) results.\\n\\n| **Instruction** | **TITSP (CR)** | **TITSP (MSE)** | **Time-LLM (CR)** | **Time-LLM (MSE)** | **Qwen4MTS (CR)** | **Qwen4MTS (MSE)** | **UniTime (Qwen) (CR)** | **UniTime (Qwen) (MSE)** | **Llama-3.1-8B (CR)** | **Llama-3.1-8B (MSE)** |\\n|---------------------------------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------------------|-------------------------|------------------------|------------------------|\\n| Linear Growth and Linear Decay | **0.83** | **1.15** | 0.38 | 3.45 | _0.69_ | _1.90_ | 0.54 | 2.73 | 0.32 | 4.95 |\\n| Linear Growth and Linear Decay | **0.79** | **1.17** | 0.49 | 2.85 | **0.79** | _1.34_ | 0.57 | 2.28 | 0.41 | 2.80 |\\n| Linear Trend Up | _0.90_ | **1.03** | 0.63 | 1.71 | 0.76 | _1.08_ | 0.63 | 1.65 | **0.91** | 1.15 |\\n| Linear Trend Down | **0.87** | **0.88** | 0.64 | 1.55 | 0.71 | 1.36 | 0.51 | 1.59 | _0.85_ | _0.92_ |\\n| Exponential Growth | **0.89** | **1.33** | 0.58 | 2.59 | _0.63_ | _2.07_ | 0.60 | 2.38 | 0.58 | 2.35 |\\n| Exponential Decay | **0.84** | **1.25** | 0.56 | 2.26 | 0.67 | 2.10 | _0.69_ | _2.05_ | 0.46 | 2.39 |\\n| Keep Stable | **0.98** | _0.35_ | 0.76 | 0.76 | 0.93 | 0.48 | 0.83 | 0.62 | _0.95_ | **0.33** |\\n| Decrease Amplitude | **0.90** | _0.91_ | 0.85 | 1.04 | **0.90** | **0.84** | 0.79 | 1.09 | 0.52 | 1.89 |\\n| Increase Amplitude | **0.94** | **0.94** | 0.79 | 1.20 | _0.89_ | _0.96_ | 0.81 | 1.03 | 0.75 | 1.35 |\\n| Logarithmic Growth | _0.77_ | _1.65_ | 0.49 | 2.31 | **0.79** | **1.55** | 0.60 | 1.73 | 0.55 | 1.94 |\\n| Logarithmic Decay | **0.83** | **1.68** | 0.48 | 2.19 | _0.81_ | _1.69_ | 0.67 | 2.04 | 0.63 | 2.60 |\"}", "{\"title\": \"Response to Reviewer YdJR\", \"comment\": \"Dear reviewer: I wonder if our response can solve your concerns! Thank you!\"}", "{\"summary\": \"The paper introduces Text-Informed Time Series Prediction (TITSP), a novel two-stage framework that enhances time series forecasting by integrating domain-specific textual information. The paper demonstrates that TITSP significantly outperforms traditional and existing multimodal approaches, improving both predictive accuracy and interpretability.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"1. The paper presents a novel approach to time series forecasting by integrating textual instructions, which is a creative extension of existing multimodal time series models. The introduction of a two-stage framework and the focus on instruction-based forecasting address a significant gap in the field.\\n2. The paper is well-written and logically organized. The figures and tables are clear and effectively support the text. The problem formulation and the description of the methodology are easy to follow.\", \"weaknesses\": \"1. Given the synthetic data generation process, how can the authors ensure that there is no data leakage between the text data and forecasting targets? Could the authors provide a detailed explanation of the data generation process to address this concern?\\n2. How practical is the proposed approach in real-world scenarios where textual instructions may not always be available or may be ambiguous? 
Could the authors discuss the potential limitations and challenges in deploying TITSP in practical applications?\\n3. Has the model been tested on any other multimodal time series analysis tasks beyond forecasting? If not, what are the potential challenges in applying TITSP to other tasks?\", \"questions\": \"Please see the weaknesses.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"The paper does not raise any significant ethical concerns.\", \"rating\": \"5\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer GGqR\", \"comment\": \"Thank you for your precious comments!\\n\\n### Comment 1:\\n*Technical Contributions are Incremental* \\n_The proposed approach lacks significant technical innovation. Integrating LLMs with time series is an incremental step rather than a groundbreaking contribution. The use of cross-attention and VQ-VAE offers no substantial improvement beyond established techniques._\\n\\n**Response:** \\nWe appreciate the reviewer\\u2019s feedback. While it is true that certain components of our architecture, such as cross-attention and VQ-VAE, are established techniques, our contribution lies in the development of a novel methodological framework. This framework includes a tailored data pipeline, innovative architecture design, and a comprehensive evaluation approach, all specifically geared towards integrating text-based instructions with time series forecasting.\\n\\nWe believe this is a significant contribution because it provides a structured approach to applying language models in the context of time series, which is a growing area of interest with wide applicability in fields like supply and demand forecasting. The ability to define and manipulate hypothetical scenarios through textual instructions opens new avenues for adaptable and context-sensitive forecasting models. We are confident that this framework will be valuable to the community, as it sets a foundation for future work in this space.\\n\\n---\\n\\n### Comment 2:\\n*Poor Structure and Clarity* \\n_The paper is poorly organized, with unclear explanations and an incoherent flow. The motivation and rationale for the proposed method are inadequately communicated, and critical components like AutoPrompter are explained in a convoluted manner, hindering comprehension._\\n\\n**Response:** \\nWe are sorry to hear that some aspects of the paper were unclear. We greatly value the reviewer\\u2019s feedback and would be very interested in a constructive discussion to better understand which specific parts of the motivation and rationale were difficult to follow. In the related work, we have added a paragraph to highlight the main motivation and differences of our work compared to state-of-the-art methods, especially those targeting instruction-based forecasting.\\n\\nRegarding the reviewer\\u2019s concern with the explanation of AutoPrompter, we would like to clarify its purpose: AutoPrompter serves as a bridge that translates the time series data into the text embedding space. By quantizing the time series space, we map it into a compressed semantic space, which may have contributed to some of the complexity in the explanation. 
We have added additional clarifications in the updated version of the paper (_Page 5_) to ensure this concept is more accessible and the overall flow is clearer.\\n\\nWe appreciate the reviewer\\u2019s insights and hope the revised manuscript will address these concerns effectively.\\n\\n---\\n\\n### Comment 3:\\n*Inadequate Experiments, Superficial Related Work, Numerous Typos and Lack of Polish, Insufficient Practical Insights*\\n\\n**Response:** \\n\\n**Inadequate Experiments:** \\nWe acknowledge the reviewer\\u2019s concern about the reliance on synthetic datasets. While we agree that real-world data is crucial for evaluating practical applicability, synthetic datasets were used primarily to demonstrate the model\\u2019s capacity to handle controlled scenarios where the impact of specific factors can be isolated. We have now included benchmarks against state-of-the-art approaches (Llama-3.1-8B-instruct, Qwen4MTS and Unitime in Table 2). The results are highly stable and reproducible, with substantial performance margins over competitors, which reduce the necessity for statistical significance testing.\\n\\n**Superficial Related Work:** \\nWe have expanded the related work section to better differentiate our approach from prior research, particularly in the integration of text and time series. References such as Unitime have been added to strengthen the justification for our originality.\\n\\n**Insufficient Practical Insights:** \\nThe interpretability of our framework lies in facilitating interaction between expert users and the model through hypothetical scenarios. For example, the model generates forecasting scenarios based on textual instructions about supply and demand conditions, enabling experts to evaluate potential outcomes. This is particularly useful in fields like supply chain management, where generating and testing \\\"what-if\\\" scenarios through textual inputs offers clear practical benefits.\\n\\n---\"}", "{\"title\": \"Response to Reviewer YdJR\", \"comment\": \"### Comment 1:\\n*Given the synthetic data generation process, how can the authors ensure that there is no data leakage between the text data and forecasting targets? Could the authors provide a detailed explanation of the data generation process to address this concern.*\\n\\n**Response:** \\nWe thank the reviewer for raising this important question. While the concern about data leakage is valid in many contexts, it is not a central issue in our case. The primary goal of our work is to assess the model's adherence to specific textual instructions rather than predict the target based solely on the time series data. To clarify, consider three samples with identical context length: a deterministic machine learning model would typically produce the same forecast for these samples. However, by adding textual instructions that specify a particular scenario or condition, we introduce a new layer of information that the model must adhere to. This is not a data leakage problem but rather a way of interacting with the model through different hypothetical scenarios. \\n\\nThe compliance rate is explicitly defined to measure how well the model follows these instructions while preserving the underlying time series structure. Thus, the model\\u2019s ability to follow instructions is the focus, rather than predicting targets based solely on historical data. 
This being said, while the focus of the paper is on text-instructed problems, we also perform experiments in the Appendix on other types of data where the text describes the task (e.g., domain, forecasting type, features). In these cases, the dataset contains no leakage, and our proposed algorithm outperforms the state-of-the-art.\\n\\n---\\n\\n### Comment 2:\\n*How practical is the proposed approach in real-world scenarios where textual instructions may not always be available or may be ambiguous? Could the authors discuss the potential limitations and challenges in deploying TITSP in practical applications?*\\n\\n**Response:** \\nWe appreciate the reviewer\\u2019s thoughtful question. As discussed in our response to the first comment, the primary purpose of our approach is to evaluate how well the model adheres to specific textual instructions in controlled scenarios. While textual instructions are central to this evaluation, we acknowledge that in real-world applications, such instructions may not always be available or could be ambiguous. \\n\\nOne potential limitation is the reliance on clear, actionable instructions, which may not always be feasible in dynamic or unstructured environments. Additionally, the model\\u2019s performance may be affected by the quality and specificity of the textual input. However, our framework is designed to handle a wide range of instruction formats and adapt to different hypothetical scenarios, making it flexible for practical deployment. We also envision that the model could be augmented with supplementary mechanisms (e.g., user feedback loops or clarification prompts) to address ambiguity in real-world use cases.\\n\\n---\\n\\n### Comment 3:\\n*Has the model been tested on any other multimodal time series analysis tasks beyond forecasting? If not, what are the potential challenges in applying TITSP to other tasks?*\\n\\n**Response:** \\nWe appreciate the reviewer\\u2019s question. While our model has primarily been tested for forecasting tasks, extending it to other multimodal time series analysis tasks such as classification presents some challenges. In classification, the output is often constrained to predefined labels, which limits the flexibility needed to explore different hypothetical scenarios through textual instructions. This makes it difficult to leverage the full potential of our approach in such tasks.\\n\\nHowever, as demonstrated in the Appendix, our framework can be extended to scenarios where instead of instructions, we incorporate other types of additional information related to the task at hand. In these cases, our model still outperforms existing methods, suggesting that the framework has potential beyond forecasting, even for tasks with more constrained output spaces. Furthermore, we believe that imputation tasks could be a natural extension of our framework, as it can easily accommodate missing data by conditioning on other available information, showing that our approach is adaptable to different problem settings.\"}" ] }
00ezkB2iZf
MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering
[ "Robert Ness", "Katie Matton", "Hayden Helm", "Sheng Zhang", "Junaid Bajwa", "Carey Priebe", "Eric Horvitz" ]
Large language models (LLMs) have achieved impressive performance on medical question-answering benchmarks. However, high benchmark accuracy does not imply robust performance in real-world clinical settings. Medical question-answering benchmarks rely on assumptions consistent with quantifying LLM performance but that may not hold in the open world of the clinic. Yet LLMs learn broad knowledge that could help the LLM perform in practical conditions regardless of unrealistic assumptions in celebrated benchmarks. We seek to quantify how robust LLM medical question-answering benchmark performance is to violations of unrealistic benchmark assumptions. Specifically, we present an adversarial method that we call MedFuzz (for medical fuzzing). MedFuzz attempts to modify benchmark questions in ways aimed at confounding the LLM. We demonstrate the approach by targeting unrealistic assumptions about patient characteristics presented in the MedQA benchmark. Successful "attacks" modify a benchmark item in ways that would be unlikely to fool a medical expert but nonetheless "trick" the LLM into changing from a correct to an incorrect answer. Further, we present a non-parametric test for calculating the statistical significance of a successful attack. We show how to calculate "MedFuzzed" performance on a medical QA benchmark, as well as how to find individual cases of statistically significant successful attacks. The methods show promise at providing insights into the ability of an LLM to operate robustly in more realistic settings.
[ "large language model", "adversarial machine learning", "automatic red teaming" ]
Reject
https://openreview.net/pdf?id=00ezkB2iZf
https://openreview.net/forum?id=00ezkB2iZf
ICLR.cc/2025/Conference
2025
{ "note_id": [ "v9MPHvKB79", "lm5Z9TT5lJ", "XeqVl4YWA6", "TeO25XUwES", "Se1GK2iVy4", "RoHfL53eaw", "NvznhEBuAw", "M9K6lklgnS", "GmeNsvtsrK", "EU8ZZoZ56t", "Al5ULDSosk", "2XGhQ3OS4y" ], "note_type": [ "official_review", "official_comment", "official_comment", "official_review", "meta_review", "official_comment", "official_review", "official_review", "decision", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1730166452694, 1732591625014, 1732578540365, 1730704092447, 1734303577687, 1732688098051, 1730619456036, 1730387849286, 1737524123423, 1732587300700, 1732583176623, 1732633177998 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11424/Reviewer_6sJS" ], [ "ICLR.cc/2025/Conference/Submission11424/Authors" ], [ "ICLR.cc/2025/Conference/Submission11424/Authors" ], [ "ICLR.cc/2025/Conference/Submission11424/Reviewer_GdQb" ], [ "ICLR.cc/2025/Conference/Submission11424/Area_Chair_6xkD" ], [ "ICLR.cc/2025/Conference/Submission11424/Reviewer_6sJS" ], [ "ICLR.cc/2025/Conference/Submission11424/Reviewer_Dsnm" ], [ "ICLR.cc/2025/Conference/Submission11424/Reviewer_EcvC" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11424/Authors" ], [ "ICLR.cc/2025/Conference/Submission11424/Authors" ], [ "ICLR.cc/2025/Conference/Submission11424/Reviewer_Dsnm" ] ], "structured_content_str": [ "{\"summary\": \"This paper proposes MedFuzz, a novel approach designed to evaluate the robustness of large language models (LLMs) in medical question-answering contexts. MedFuzz introduces controlled perturbations in input text by adding patient characteristics (PC) and social bias information to simulate real-world variability and challenges encountered in clinical settings.\\n\\nThe authors highlight the limitations of traditional medical benchmarks that often simplify clinical scenarios and position MedFuzz as an advancement towards \\u201cbeyond-the-benchmark\\u201d evaluations. Specifically, the paper presents experiments assessing LLMs' responses to MedFuzz perturbations and evaluates the consistency of chain-of-thought (CoT) explanations under these conditions. The study offers a new perspective on testing LLM robustness by addressing potential risks in clinical decision-making when assumptions of canonical benchmarks do not hold.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": \"1. This paper introduces MedFuzz, a novel approach for testing the robustness of large language models (LLMs) in clinical contexts, which addresses the simplifications found in traditional benchmarks. MedFuzz is distinct in its approach by adding specific patient characteristics and social bias information to simulate the complexity of real-world clinical scenarios. This innovative framework offers a new direction for assessing LLM robustness by examining potential vulnerabilities in medical question-answering settings.\\n\\n2. The paper clearly explains the concept of MedFuzz and its application, particularly in using patient characteristics and bias elements to test model robustness. The experimental procedures and components are consistently described, making the study's objectives and methodology easy for readers to follow.\\n\\n3. MedFuzz presents a significant contribution as it provides a framework to evaluate how LLMs may perform in real clinical settings, beyond simplified benchmarks. 
This work has high practical relevance for the safe implementation of LLMs in healthcare by strengthening robustness assessment and reducing potential errors. It contributes an essential tool for enhancing LLM applicability in clinical practice, highlighting the importance of robustness in medical AI.\", \"weaknesses\": \"1. The authors clarified the distinction between robustness and generalization in their response, emphasizing that robustness in this study is tied to resilience against violations of benchmark assumptions. This clarification addresses the original concern, though ensuring this explanation is explicitly included in the revised manuscript remains important.\\n2. The authors clarified that MedFuzz is designed to surface biases already present in the target model and does not introduce confusion into clinical decision-making itself. While this explanation addresses the primary concern, ensuring that the revised manuscript provides sufficient justification for the use of specific patient characteristics as perturbations will remain critical.\\n3. The authors acknowledged that the scale of perturbations could be further refined and suggested this as future work. Including a brief discussion in the revised manuscript about the implications of perturbation scale would strengthen this point.\\n4. The authors agreed to expand the analysis of CoT fidelity to include unsuccessful attacks in addition to successful ones. This addition should provide a more comprehensive baseline for evaluating the vulnerabilities identified by MedFuzz. Ensuring this analysis is effectively implemented in the revised manuscript will be crucial.\", \"questions\": \"1. It would be helpful to have specific examples illustrating the risks posed by the simplified assumptions in traditional benchmarks within clinical settings. For instance, if omitting certain patient characteristics or clinical contexts could lead to diagnostic errors, providing these examples would clarify the importance of this study for readers and highlight its relevance.\\n\\n2. I am curious whether the patient characteristics (e.g., age, gender) and social bias information added as perturbations in MedFuzz genuinely act as confusion factors within actual clinical environments. These details often serve as crucial data points in clinical decision-making, so further explanation on how these elements were deemed appropriate as confusion-inducing factors would enhance the clinical validity of this study.\\n\\n3. A clear explanation regarding the rationale for setting the perturbation iteration count to K=5 would be beneficial. For instance, do you have experimental results comparing the initial attack (K=1) with subsequent attacks (K=5) to illustrate how the LLM maintains performance with increasing perturbation levels? Such a comparison could provide a more reliable basis for evaluating the impact of iteration count on robustness in this study.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"In the MedFuzz study, patient characteristics (PC) such as age, gender, race, and socioeconomic factors are added as perturbations to induce confusion in LLMs. 
One specific example presented by the authors is the use of \\u201cexcessive hospital service usage by low-income patients.\\u201d This type of information could inadvertently reinforce social biases or perpetuate negative perceptions about certain demographic groups, rather than reflect clinical validity or fairness.\\n\\nWhen such characteristics are introduced as confusion-inducing factors, there is a risk that essential background information\\u2014critical for accurate diagnosis and treatment\\u2014could lead to biased outcomes. Therefore, further clarification and evaluation are needed to ensure that MedFuzz\\u2019s inclusion of such data as perturbations aligns with clinical relevance and fairness, and to mitigate any potential reinforcement of harmful social biases in the model.\\n\\nNo further questions\", \"rating\": \"5\", \"confidence\": \"5\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer 6sJS's feedback\", \"comment\": \"We appreciate the reviewer\\u2019s detailed evaluation and thoughtful feedback on our manuscript. Below, we address each of the concerns and questions raised.\\n\\n### **1. Definition of Robustness vs. Generalization**\\nRobustness in the context of MedFuzz refers to the resilience of a model\\u2019s performance statistic (e.g., accuracy) when assumptions underlying the benchmark are violated in real-world settings. This includes maintaining performance when diagnostically irrelevant details are introduced. By contrast, generalization in statistics refers to the ability of a model to perform well on unseen data sampled from the same distribution as the training data.\\n\\nWe will revise the manuscript to clarify this distinction and emphasize that robustness here is specifically tied to the benchmark\\u2019s assumptions and the model\\u2019s ability to handle clinically irrelevant or misleading details.\\n\\n### **2. Patient Characteristics and Bias**\\nWe regret that we were not clearer about how the use of patient characteristics (PC) in MedFuzz does not introduce or reinforce bias. Rather, it aims to surface biases already implicit in the target model. MedFuzz is a diagnostic tool to evaluate LLMs before they are deployed in clinical decision-making scenarios.\\n\\nImportantly, MedFuzz itself does not serve answers to questions in clinical settings\\u2014it evaluates the robustness of models that do. In that evaluation, it does not change or modify the target model. This distinction will be made clearer in the revised manuscript.\\n\\n### **3. Scale of Perturbations**\\nWe did not constrain the proportion of added text during perturbations because, in our experience, the length of added text was still well within the length of the context windows for the target LLMs. We agree with the reviewer that analyzing how varying amounts of irrelevant information impact target model performance would be valuable. We will include this as a suggestion for future work.\\n\\n### **4. Chain-of-Thought Fidelity**\\nThe CoT analysis focused on successful attacks to demonstrate that inspecting CoT explanations alone is insufficient to reveal the vulnerabilities surfaced by MedFuzz. We will expand the analysis to include unsuccessful attacks as well.\\n\\n### **5. Examples of Benchmark Assumption Errors**\\nThe manuscript cites examples of errors that are not caught by traditional benchmark evaluation due to simplifying assumptions in those benchmarks. For example, we cite references showing GPT-3 demonstrating biases toward certain patient groups. 
We will expand on these examples in the revised manuscript to better illustrate the risks posed by such assumptions.\\n\\n### **6. Ethical Concerns Regarding Bias**\\nWe address the ethical concerns raised by clarifying that MedFuzz is designed to surface biases in the target model, not to introduce or reinforce them. MedFuzz operates as an evaluation tool, diagnosing vulnerabilities in LLMs that may be deployed in clinical settings.\\n\\nWe explicitly state that failure to surface such biases does not imply their absence. Furthermore, MedFuzz is not intended to answer medical questions but rather to assess the robustness of models that do. We will revise the manuscript to better highlight these points and allay concerns about bias reinforcement.\\n\\n### **7. Perturbation Iteration Count \\\\(K\\\\)**\\nThe results for different values of \\\\(K\\\\) are shown in Figure 2. We demonstrate how performance changes as the number of perturbation iterations increases, providing empirical support for the choice of \\\\(K=5\\\\) as a practical balance between computational cost and perturbation effectiveness. We will ensure that this explanation is clearly referenced in the manuscript.\\n\\n### **Revisions to the Manuscript**\\nTo address the reviewer\\u2019s feedback, we will:\\n1. Clarify the distinction between robustness and generalization, explicitly tying robustness to real-world violations of benchmark assumptions.\\n2. Emphasize that MedFuzz evaluates models to surface implicit biases, rather than introducing or reinforcing them.\\n3. Expand on examples of errors caused by traditional benchmark assumptions to strengthen the motivation for MedFuzz.\\n4. Expand the analysis of CoT fidelity to cover questions where attacks were unsuccessful, to establish a baseline for the analysis.\\n5. Ensure the ethical role of MedFuzz as an evaluation tool is clearly communicated.\\n6. Expand upon the discussion of \\\\(K\\\\) and iteration counts and how to select ideal values of \\\\(K\\\\).\\n\\nWe appreciate the reviewer\\u2019s constructive feedback, which has helped us identify areas to strengthen the manuscript and address concerns. These revisions will further clarify MedFuzz\\u2019s methodology, ethical considerations, and contributions to LLM robustness evaluation.\"}", "{\"title\": \"Response to Reviewer GdQb: Questions about P-value Distribution, Trends in Duccessful Attacks, and Human Evaluation\", \"comment\": \"We thank the reviewer for their thorough evaluation and constructive feedback on our manuscript. Below, we address each point raised:\\n\\n### **Faithfulness of Reformulated Questions**\\nWe acknowledge the concern about the reliance on the attacker LLM (GPT-4) to maintain the medically correct answer while generating fuzzed questions. In our approach, the attacker LLM is explicitly prompted to preserve the correct answer, which is provided during the fuzzing process. This ensures that the fuzzes remain anchored to the original question's intent. Furthermore, we rely on the attacker LLM\\u2019s demonstrated human-level accuracy on the benchmark as an assumption for generating high-quality fuzzes at scale. \\n\\nGiven the large scale of MedFuzz experiments, manual quality assurance for every fuzzed question is infeasible. However, our workflow incorporates user inspection for particularly interesting or insightful cases, with the final judgment of whether an attack is \\u201cfair\\u201d being left to human reviewers. 
While this assumption introduces some dependence on the attacker LLM\\u2019s capabilities, we believe it is reasonable for achieving scalability.\\n\\n### **Distribution of P-Values**\\nTo address the request for an impression of the p-value distribution, we ran an analysis on a run where GPT-4 was the target model, resulting in 85 successful attacks. Below are summary statistics of the p-values:\\n\\n- **Min:** 0.0, **5%:** 0.0, **25%:** 0.10, **Median:** 0.40, **Mean:** 0.408, **75%:** 0.63, **Max:** 1.0\\n\\nTo explore trends, we categorized successful attacks into two groups: (1) significant attacks (\\\\( p < 0.01 \\\\)) and (0) insignificant attacks. We calculated an odds ratio for being in group 1 vs group 0. Analysis revealed that topics like \\u201crash,\\u201d \\u201csubstance abuse,\\u201d and \\u201cultrasound\\u201d were more than twice as likely to fall into the significant group, while others like \\u201cHIV,\\u201d \\u201cbreastfeeding,\\u201d and \\u201cchronic kidney disease\\u201d were also overrepresented. However, we recognize that calculating p-values for these odds ratios would conflate p-values used as thresholds, resulting in unsound statistical inference (i.e., p-hacking).\\n\\nA more robust approach, which we plan to explore in future work, would involve estimating the success probability for specific topics using repeated attacks on individual questions. \\n\\n### **Evaluation of Chain of Thought (CoT) Faithfulness**\\nThe reviewer highlights an important point about the assessment of CoT faithfulness. In our study, we manually evaluated CoTs from successful attacks, focusing on whether the added fuzz content was explicitly referenced. This process was conducted by inspecting each CoT explanation and verifying its alignment with the fuzzed information that caused the incorrect response. \\n\\n### **Human Performance Comparison and Quality Control**\\nWe recognize the value of including human medical experts to evaluate the quality of fuzzed questions. However, due to resource constraints, this was not feasible in the current study. We plan to include human evaluation in future work to provide an additional layer of validation for the attacker LLM\\u2019s performance and the robustness of fuzzed questions.\\n\\nRegarding quality control, we rely on the attacker LLM\\u2019s prompt-engineered constraints to ensure that generated fuzzes are medically plausible and consistent with the original question\\u2019s correct answer. This reliance on a high-performing LLM is a tradeoff we make to scale the MedFuzz method across a large dataset like MedQA.\\n\\n### **Analysis of Errors and Robustness to Specific Problem Types**\\nThe reviewer\\u2019s request for a more granular analysis of errors and model vulnerabilities is well-taken. Our exploratory analysis of attack outcomes highlighted certain topics (e.g., \\u201crash,\\u201d \\u201csubstance abuse,\\u201d \\u201cultrasound\\u201d) that appear more susceptible to significant attacks. However, as noted above, a more rigorous approach to topic-level success probability estimation is necessary for conclusive insights. We plan to develop a framework for repeated attacks on specific topics, allowing us to model robustness conditional on problem type.\\n\\n### **Future Directions**\\nThe reviewer\\u2019s suggestions align with our broader vision for improving MedFuzz. Specifically, we aim to:\\n1. Incorporate human evaluation for assessing question quality and attack outcomes.\\n2. 
Develop Monte Carlo-based topic-specific estimates of probability of attack success to give insight into which topics are vulnerable.\\n\\nWe believe these improvements will address the limitations of the current study and enhance the utility of MedFuzz for evaluating medical LLMs.\\n\\n**Summary**\\nWe appreciate the reviewer\\u2019s insightful feedback and have outlined both responses to the identified weaknesses and concrete steps for future work. While some limitations remain, we are confident that MedFuzz provides valuable insights into LLM robustness and look forward to building on this foundation.\"}", "{\"summary\": \"This paper investigates the robustness of large language models in handling medical QA tasks by introducing a new evaluation method, MedFuzz. For each multiple-choice question in the original benchmarks, MedFuzz uses an LLM (referred to as the attacker LLM) to reformulate questions by adding patient characteristics that may introduce social bias without affecting the clinical decision-making process. If the target LLM answers correctly, the attacker LLM is prompted to generate additional distracting questions based on the target LLM\\u2019s feedback. Additionally, a non-parametric statistical significance test was developed by prompting the attacker LLM to create questions with patient characteristics that avoid social bias. Using this evaluation method, the authors tested seven LLMs and found a significant performance drop across all models. Moreover, they observed that when current LLMs answer incorrectly, they tend not to reference the added biased information, indicating inconsistency in faithfully adhering to the clinical decision-making process.\", \"soundness\": \"2\", \"presentation\": \"2\", \"contribution\": \"2\", \"strengths\": [\"This paper examines the robustness of LLMs in the clinical decision-making process, a critical aspect of their application in the medical domain.\", \"The evaluation results demonstrate that current LLMs lack robustness in the clinical decision-making process, offering valuable insights for the development of medical LLMs.\"], \"weaknesses\": [\"A major weakness of this paper is the faithfulness of the reformulated questions. The proposed MedFuzz method relies solely on prompt engineering with the attacker LLM (GPT-4) to modify original MedQA questions, making the attack process difficult to control. The attacker LLM may potentially alter critical information in the original questions, resulting in less reliable reformulated questions. The example in Section 3.1 also demonstrates that the attacker LLM added extensive information about the patient\\u2019s family medical history, consultation history, and medication history. These details are highly relevant in real clinical diagnosis and can significantly influence a doctor\\u2019s assessment of the patient\\u2019s condition.\", \"Moreover, although the authors propose a non-parametric statistical significance test, they do not provide the full distribution of p-values across the MedQA benchmark. In line 485, they note that for the successful attacks they selected, the p-values are <1/30, 0.1, 0.16, 0.5, and 0.63. Here, the p-value represents the probability that a control fuzz is more challenging than the original fuzz. 
Therefore, cases with p-values of 0.5 and 0.63 suggest that the performance decline in the target LLM is due to the perturbations themselves, rather than social bias.\", \"For the study of target LLM's faithfulness, it is important to also study the proportion of CoT that mentions the critical information in the original MedQA benchmark for comparison with the results provided in Figure 2B. Additionally, the authors should provide more information to help readers understand the specific process of this study. For example, how many cases were analyzed? Was the determination of whether fuzzed information was included made manually, or was an automated algorithm used?\"], \"questions\": \"1. The authors need to provide further experiments and analyses to demonstrate the reliability of the questions generated by this method, such as incorporating the performance of human experts or introducing relevant methods for quality control of the questions in the methods section.\\n\\n2. Also, more analysis of the evaluation results should be included. For example, what are the main types of errors introduced by attacks across different turns? Which specific diseases or problem types is the target LLM less robust against? By supplementing these analyses, further insights can be provided for the development of medical LLMs.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"metareview\": \"In this paper, the authors propose MedFuzz, an LLM-based technique to provide challenging medical questions. The technique allows testing medical LLMs at scale on a multiple-choice QA dataset, by modifying factors deemed as 'irrelevant' to the final diagnosis.\\n\\nWhile all reviewers found the approach interesting, significant concerns were raised in terms of the validity of the generated attacks, and whether these would be clinically plausible or indeed maintain the same response. Further concerns have been pointed out in terms of analyzing the effect of the choice of the LLM attacker, the irrelevant factors or the choice of K, or the choice of MedQA as the single dataset considered. Most of these are pointed to as future work by the authors, but I believe more depth is needed before publication. Therefore I recommend rejection.\", \"additional_comments_on_reviewer_discussion\": \"The authors provided a response very late in the discussion period, which prevented reviewers from engaging in depth. They also did not provide a revised version of the paper, which the reviewers would have wished to see.\"}", "{\"title\": \"MedFuzz: Exploring the Robustness of Large Language Models in Medical Question Answering\", \"comment\": \"Thank you for your detailed and thoughtful responses to my feedback. I look forward to reviewing your revised manuscript, which I trust will sufficiently address the concerns raised, particularly regarding the distinction between robustness and generalization, the role of patient characteristics in MedFuzz, and the ethical considerations surrounding bias.\"}", "{\"summary\": \"The paper proposes an automated red teaming approach to attack LLMs. They attempt this in the medical context by modifying medical Q&A datasets (specifically on MedQA), by violating assumptions that do not hold good in real life settings. The goal of MedFuzz is to make LLMs provide the wrong answer while ensuring that clinicians can still provide the right answer. 
The authors have identified a crucial problem with the evaluations of LLMs in the medical domain and provided a way to generate a more realistic dataset to aid subsequent LLM evaluation. The novelty lies in the proposed dataset from MedFuzz and the statistical evaluation used to check if the attack was successful.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"\\u2022\\tClarity: The paper is well written and easy to follow along. The authors have given adequate and clear examples at appropriate locations in the draft to aid readability. Good use of illustrations after consultation with domain experts (clinical collaborators in this case). The authors have also acknowledged the limitation of using contaminated training data.\\n\\n\\u2022\\tOriginality: The idea to use social biases a clever way to incorporate real life information into the MedQA dataset.\\n\\n\\u2022\\tQuality: The evaluation involves the use of proprietary vs open source and general purpose vs domain specific models. The experiment settings for reproducibility like temperature have been provided. The approach should be easy enough to reproduce. \\n\\n\\u2022\\tSignificance: The authors have tackled a relevant problem that needs to be addressed, given the rapid pace of the domain.\", \"weaknesses\": \"\\u2022\\tIn the case of MedQA dataset, the authors have identified a social bias which may be present in real life situations, which are removed in the original benchmark. It is unclear how easy it is to identify and exploit such peculiarities in other medical benchmarking datasets like MedMCQA[1], PubMedQA[2] etc.\\n\\n\\u2022\\tThe authors create the adversarial questions by an iterative multi-turn approach. Although the authors allude to the target LLM forgetting about previous Q&A attempts, would the approach be better validated if the evaluation is done in a single-turn manner?\\n\\n\\u2022\\tThe authors, in step 4, only validate the statistical significance of 4 individual interesting cases. How would this change if considered for all successful cases?\\n\\n[1] Pal A, Umapathi LK, Sankarasubbu M. Medmcqa: A large-scale multi-subject multi-choice dataset for medical domain question answering. InConference on health, inference, and learning 2022 Apr 6 (pp. 248-260). PMLR.\\n\\n[2] Jin Q, Dhingra B, Liu Z, Cohen WW, Lu X. Pubmedqa: A dataset for biomedical research question answering. arXiv preprint arXiv:1909.06146. 2019 Sep 13.\", \"questions\": \"\\u2022\\tThe authors can clarify how their approach to adversarial attacks differs from the misinformation approach in [1].\\n\\n\\u2022\\tThe authors can clarify why unfaithfulness of generated responses is a crucial dimension to consider.\\n\\n\\u2022\\tSection 2.2 Lines 104: The authors mention \\u201ctwo ways\\u201d in which MedFuzz differs from other adversarial ML approaches, though only one distinction is clear in the draft. I\\u2019m assuming the second way is the use of semantically coherent changes to the text. These few lines can probably be rephrased to add clarity.\\n\\n\\u2022\\tThe authors have conducted their experiments on the MedQA dataset and taken advantage of a constraint imposed in the curation of this dataset. The authors could potentially add broad guidelines to expand on the fuzzing idea for other medical datasets. \\n\\n\\u2022\\tHow can the authors ensure that the GPT-4 generated attack retains the same answer as the original QA pair being perturbed? 
Is there a possibility to evaluate this with the help of domain experts?\\n\\n\\u2022\\tHow is the value of K set in Algorithm 1? This can be elaborated on in the Appendix section.\\n\\n\\u2022\\tDoes the finding that LLM CoT does not mention the fuzzed information provide a way forward to identify adversarial inputs?\\n\\n\\u2022\\tAnother interesting avenue would be to examine how different kinds of LLMs perform when used as the attacking/target LLM. For example, can a smaller model generate adversarial inputs faster than a larger model like GPT-4?\\n\\n\\u2022\\tMinor Comment: Is line 10 a duplicate of line 11 in Algorithm 1?\\n\\n[1] Han T, Nebelung S, Khader F, Wang T, M\\u00fcller-Franzes G, Kuhl C, F\\u00f6rsch S, Kleesiek J, Haarburger C, Bressem KK, Kather JN. Medical large language models are susceptible to targeted misinformation attacks. npj Digital Medicine. 2024 Oct 23;7(1):288.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"NA. Authors have provided an ethics statement in the draft as well.\", \"rating\": \"6\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"summary\": \"The paper proposes an adversarial method for evaluating LLM performance on medical question-answering benchmarks to assess their robustness in real-world clinical settings. The idea is to automatically generate new question-answer pairs from the existing benchmark such that they still represent realistic scenarios (e.g., including additional patient information) but the answers remain the same. The experiment results demonstrate that various baseline LLMs can be tricked into providing incorrect answers.\", \"soundness\": \"3\", \"presentation\": \"3\", \"contribution\": \"2\", \"strengths\": [\"The idea of the paper is interesting -- existing medical QA datasets are fairly simplified and may not appropriately represent real-world clinical settings. Thus, there is a need to understand how safe LLM usage is for the medical domain via robustness analysis.\", \"The intuition for the adversarial biasing comes from medical domain understanding of the benchmark constructions.\", \"Authors benchmark 3 closed LLMs and 4 open-source, medically fine-tuned LLMs.\"], \"weaknesses\": [\"One of the major claims of the method is that it will generate new questions that are semantically coherent and will not fool clinicians. However, there is no empirical proof that this is the case other than the analysis of a handful of case studies (one is presented in the main text). The prompt contains instructions that the attacker LLM should not change the default answer, but GPT-4 is not always guaranteed to follow the instructions or to have all of the appropriate medical knowledge.\", \"Is there a reason why general domain adversarial prompting wasn't shown to be sufficient? A few studies are listed in 2.2 (first sentence) but no preliminary studies or experimental studies are shown to support this.\", \"GPT-4 is chosen as the attacker LLM, but the question is why aren't other open-source models explored? In looking at OpenBIOLLM-70B performance, this also looks like a reasonable comparison to try and might even generate harder cases with less of the computation cost.\", \"One of the comments in the introduction was that existing benchmarks are not challenging enough, including reducing real-life clinical situations to canonical multiple choice questions. 
Is there a reason why only one dataset was included and it was a multiple-choice one?\", \"The statistical test is proposed to identify the significance of a successful attack using control fuzzes and to select the case studies, but what about the general distribution for the MedQA dataset? How stable is it broadly in identifying how significant a successful attack is? I understand this can be computationally intensive and costly but that also raises a bit of questions regarding the applicability of the method if it can't be done at scale.\", \"The presentation could have been improved to provide some intuition at the beginning with potentially a simpler case study where less was added to make the LLM response change. Similarly, some of the text is written in a less digestible format. For example, the introduction of the test statistic could be improved by introducing notation first and then how you might compute it to understand what the statistic is looking to capture.\", \"The citation format is incorrect, please use \\\\citep instead of \\\\cite as it detracts from readability.\"], \"questions\": [\"Why was MedQA the only dataset used? There are a few other multiple choice medical QA ones liked MedMCQA, PubMedQA, and MMLU Clinical topics. Why MedQA?\", \"Why was only GPT-4 used as the attacker LLM? Seemingly there are other open source ones that have just as much medical knowledge especially looking at the fine-tuned example.\", \"The workflow for the Step 2 is quite a few iterative turns. Are they all necessary to generate grounded ones? Is this workflow generalizable to other LLMs? Or is it GPT-4 specific?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"3\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Reject\"}", "{\"title\": \"Response to Reviewer EcvC\", \"comment\": \"We thank the reviewer for their detailed and thoughtful feedback on our manuscript. Below, we address each of the points raised and clarify our methodological choices. Firstly, we will correct the `\\\\citep` citation format throughout the manuscript.\\n\\n### **1. Empirical Validation of Semantically Coherent Fuzzes**\\nOur qualitative evaluation of the fuzzed questions relies on feedback from medical expert users who review successful attacks and assess their plausibility. While this approach is effective in surfacing interesting cases, we recognize the need for more systematic and quantitative validation to empirically verify that clinicians would consistently provide correct answers to fuzzed questions. This limitation will be addressed in future work as part of broader medical expert evaluation efforts.\\n\\n### **2. Use in Other Domains**\\nThe approach used in MedFuzz would apply in other domains. The approach relies on a domain expert who design the attacks and evaluate the results. The domain should also face serious robustness challenges in deploying in real-world settings. We chose medicine because of our experience with challenges in this domain.\\n\\n### **3. Choice of GPT-4 as the Attacker LLM**\\nWe selected GPT-4 as the attacker LLM due to its exceptional performance on MedQA. The attacker LLM must perform at least at a human level on the benchmark to effectively generate attacks that preserve the correct answer while introducing subtle, diagnostically irrelevant distractors. 
GPT-4 has also performed well on theory of mind tests (Strachan et al., 2024), suggesting it would be good at generating ways to \\\"trick\\\" a test taker (the target LLM).\\n\\nWe recognize the potential value of exploring fine-tuned open-source models like OpenBioLLM-70B as attackers. However, current fine-tuned models lack both strong performance on this benchmark and demonstrated generalist reasoning abilities in other settings. In future work, we aim to investigate whether fine-tuning open-source models can achieve similar attacker capabilities at a lower computational cost.\\n\\n### **4. Use of MedQA Dataset**\\nWe selected MedQA because it remained a challenging benchmark for state-of-the-art language models until GPT-4 and its direct competitors achieved near-human performance. To demonstrate MedFuzz\\u2019s value, the target LLM needed to perform well enough on the benchmark to reveal meaningful vulnerabilities beyond just not understanding the questions.\\n\\nExpanding MedFuzz to other datasets like MedMCQA, PubMedQA, or MMLU Clinical Topics is an exciting direction for future work. The challenge with these datasets is that their variety in answer format and topic makes it challenging to identify assumptions to violate that don't hold up in clinical settings. Relative to MedQA, they do not align as closely with our specific focus on robustness to real-world assumptions.\\n\\n### **5. Scalability of the Statistical Test**\\nThe computational expense of the statistical test arises primarily from generating control fuzzes. For multiple-choice benchmarks, we recommend generating at least 30 control fuzzes per attack to ensure granularity in p-values that align with conventional significance thresholds. In future work, we plan to extend this methodology to open-ended answers by embedding generated responses and deriving p-values from the embeddings. We leave this to future work because it will require theoretical treatment as well as a much larger number of control fuzzes, but it will improve applicability to a wider range of benchmarks.\\n\\n### **6. Iterative Workflow**\\nThe iterative workflow is not specific to GPT-4 and can be applied to other high-performing models like Claude Sonnet.\\n\\nIterative turns are necessary to refine fuzzes, leveraging feedback from the target LLM to ensure attacks are semantically coherent and effective. While single-shot attacks are simpler, they often fail to exploit the nuanced vulnerabilities in advanced LLMs, as demonstrated by our initial experiments with single-turn methods (these negative results will be added to the appendix for transparency).\\n\\n### **7. Presentation and Intuition**\\nWe appreciate the reviewer\\u2019s suggestion to improve readability. In the revised manuscript, we will:\\n- Add a simpler case study early in the text to illustrate the method.\\n- Reorganize the introduction of the test statistic to introduce notation first, followed by an explanation of how it captures the significance of successful attacks.\\n\\n### **Summary**\\nWe are grateful for the reviewer\\u2019s feedback and have outlined revisions to enhance the clarity, scalability, and rigor of our work. These include:\\n1. Adding negative results from single-shot attacks to the appendix.\\n2. Revising sections for improved readability and presentation.\\n3. 
Expanding the discussion of iterative workflows and generalizability to other datasets and models.\\n\\nWe thank the reviewer for their thoughtful suggestions and are confident that these updates will strengthen the manuscript.\"}", "{\"title\": \"Response to Reviewer Dsnm\", \"comment\": \"We appreciate the reviewer\\u2019s thoughtful and constructive feedback on our manuscript. Below, we address each of the points raised and clarify aspects of our approach.\\n\\n### **1. Single-Turn Attacks**\\nWe initially explored single-shot attacks as a baseline approach. For example, with GPT-4 achieving 88.5% accuracy on the MedQA benchmark, we created several modified datasets that added diagnostically irrelevant patient characteristics. These datasets included patients characterized by varying socioeconomic statuses (e.g., affluent or low-income) and different racial or ethnic groups (Asian, Black, Hispanic, Native American, White), while excluding questions where race was clinically relevant. Across these datasets, no statistically significant change in accuracy was observed, indicating that such single-turn perturbations were too \\u201ceasy\\u201d for advanced models like GPT-4. These findings inspired MedFuzz's multi-turn approach. We can include these negative results in the appendix to demonstrate the progression of our method.\\n\\n### **2. Expanding Statistical Validation**\\nWe strongly adhere to the conventional statistical approach of having the end user evaluate interesting results and then using significance tests to validate that these findings contain signal. Expanding significance testing to a broader set of results would necessitate multiple comparisons corrections, which we leave for future work. Furthermore, ranking results based on p-values invites the risk of p-hacking, which we aim to avoid. \\n\\n### **3. Faithfulness of Responses**\\nThe faithfulness analysis is not intended as a core contribution but rather as a supplementary finding to highlight that vulnerabilities revealed by MedFuzz cannot simply be detected by inspecting CoT explanations. We agree this distinction can be emphasized more clearly in the manuscript.\\n\\n### **4. Comparison to Misinformation Attacks in Han et al. (2024)**\\nMedFuzz fundamentally differs from the approach in Han et al. (2024). The attacks described in that work aim to poison target LLMs by injecting falsehoods during model updates, requiring access to gradients and training data. In contrast, MedFuzz does not poison, it *detects* \\u201cpoison\\u201d, and does so without access to gradients or the data used for model updates.\\n\\nWe will clarify this distinction in the manuscript to address the reviewer\\u2019s concern.\\n\\n### **5. Guidelines for Expanding MedFuzz**\\nMedFuzz\\u2019s approach is generalizable to any domain where benchmarks rely on performance statistics (e.g., accuracy) that are contingent on assumptions not robust in real-world settings. While our study focuses on medical datasets requiring clinical expertise, domain experts in other fields can evaluate MedFuzz outputs for their respective use cases.\\n\\nFor medical benchmarks like MedMCQA and PubMedQA, the key challenge is identifying assumptions analogous to those violated in MedQA. We can provide broad guidelines for extending MedFuzz, such as focusing on domain-specific biases, assumptions, or oversights that simplify real-world complexity.\\n\\n### **6. 
Ensuring Correct Answers in Fuzzed Questions**\\nWe rely on the attacker LLM\\u2019s high performance on MedQA to generate effective attacks while preserving the correct answer. Medically experienced users validate successful attacks by inspecting outputs and running significance tests. When the attacker fails to \\u201cfuzz\\u201d the question well, this is discovered during that human evaluation step. We plan for extensive human medical expert evaluation in future work.\\n\\n### **7. Value of \\\\(K\\\\) in Algorithm 1**\\nThe ideal value of \\\\(K\\\\) (number of iterations) depends on the target model\\u2019s capabilities on a given benchmark. We will update the manuscript to suggest tuning \\\\(K\\\\) on a pilot subset of the data, increasing it incrementally until the marginal gains from additional iterations are no longer worth the computational expense.\\n\\n### **8. Smaller Models as Attackers**\\nWe believe that the attacker model must have reached human-level performance on the benchmark to identify effective attacks. This ensures that the attacker LLM can leverage its understanding of the benchmark and the correct answer to generate meaningful perturbations. Smaller models' effectiveness would be limited by their lower performance on the benchmark. Exploring this tradeoff is a promising direction for future work.\\n\\n### **9. Redundancy in Algorithm 1**\\nThank you for pointing out the redundancy in Algorithm 1; we will clarify this in the revised manuscript.\\n\\n### **Summary**\\nWe appreciate the reviewer\\u2019s positive assessment of the manuscript\\u2019s clarity, originality, quality, and significance. We have provided clarifications and plan to incorporate additional results and updates in the revised manuscript, including:\\n1. Negative results from single-shot attacks in the appendix.\\n2. Broader guidelines for applying MedFuzz to other domains.\\n3. Refinements to Algorithm 1 and expanded discussion on parameter tuning.\\n\\nWe thank the reviewer for their valuable feedback.\"}", "{\"comment\": \"Thank you for your detailed response! I shall wait for the revised paper to look at the edits made.\"}" ] }
00SnKBGTsz
DataEnvGym: Data Generation Agents in Teacher Environments with Student Feedback
[ "Zaid Khan", "Elias Stengel-Eskin", "Jaemin Cho", "Mohit Bansal" ]
The process of creating training data to teach models is currently driven by humans, who manually analyze model weaknesses and plan how to create data that improves a student model. Recent approaches using large language models (LLMs) as annotators reduce human annotation effort, but still require humans to interpret feedback from evaluations and control the LLM to produce data the student needs. Automating this labor-intensive process by creating autonomous data generation agents – or teachers – is desirable, but requires environments that can simulate the feedback-driven, iterative, closed loop of data creation. To enable rapid and scalable testing for such agents and their modules, we introduce DataEnvGym, a testbed of teacher environments for data generation agents. DataEnvGym frames data generation as a sequential decision-making task, involving an agent consisting of a data generation policy (which generates a plan for creating training data) and a data generation engine (which transforms the plan into data), inside an environment that provides feedback from a student. The agent’s end goal is to improve student model performance. Students are iteratively trained and evaluated on generated data, with their feedback (in the form of errors or weak skills) being reported to the agent after each iteration. As a general-purpose testbed, DataEnvGym includes multiple instantiations of teacher environments across three levels of structure in the state representation and action space, with varying levels of scaffolding support. More structured environments are based on automatically-inferred skills and offer a higher degree of interpretability and control over the curriculum. We support developing and testing data generation agents in four diverse tasks covering text, images, and actions (mathematics, programming, visual question answering, and tool-use) and test multiple student and teacher models. We find that example agents in our teaching environments can iteratively improve students across diverse tasks and settings. Moreover, we show that environments can teach different skill levels and can be used to test variants of key modules, pointing to directions of future work in improving data generation agents, engines, and feedback mechanisms. Project page: https://DataEnvGym.github.io.
[ "iterative data generation", "llm agent", "lifelong learning" ]
Accept (Spotlight)
https://openreview.net/pdf?id=00SnKBGTsz
https://openreview.net/forum?id=00SnKBGTsz
ICLR.cc/2025/Conference
2025
{ "note_id": [ "zpboemkkjR", "wqTNtVDwef", "wnsiUkDh00", "r8ZflFk3T7", "pOR42YNLtU", "m1iUqPHpwk", "la5jPwJU4g", "kokKFEn2fw", "i3QgWgrJff", "hWat8aFBRw", "h1qvpjhRP3", "ZqwAYtcmhv", "NEsxOTkkIV", "H2h2K6a8x5", "GMsjHLXdOx", "DjVKsUoFN2", "C3MhCuKhTf", "Bgr7Ol90m7", "Aq2tBtB0lt", "9OQJoesINr", "7XT4kLWV2f", "66buacQmRe", "4CnQpVCYkF", "13mj0Rtn5W" ], "note_type": [ "meta_review", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "decision", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment", "official_comment", "official_comment", "official_review", "official_comment", "official_comment", "official_comment" ], "note_created": [ 1734707665762, 1732591950077, 1732728515406, 1730852147119, 1732143304296, 1732143509313, 1732475842141, 1737524100832, 1730687830645, 1732707243013, 1732143067624, 1732563943240, 1732651200753, 1732356298896, 1730714354937, 1732507218015, 1732563878904, 1732317066420, 1732143438536, 1732565271093, 1730472742428, 1732143741634, 1732142922643, 1732728465105 ], "note_signatures": [ [ "ICLR.cc/2025/Conference/Submission11063/Area_Chair_eoLd" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_VQ9Y" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Program_Chairs" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_rVo8" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_wuGW" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_c5nB" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_rVo8" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_c5nB" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_rVo8" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_VQ9Y" ], [ "ICLR.cc/2025/Conference/Submission11063/Reviewer_wuGW" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ], [ "ICLR.cc/2025/Conference/Submission11063/Authors" ] ], "structured_content_str": [ "{\"metareview\": \"The paper frames the problem of automatic data generation (to improve a ML model) as a sequential decision making task, and provides Gym environments as well as LLM-based agents that are effective for them. The resulting datasets are shown to be effective for ML models in math reasoning, coding, and visual question answering domains.\\n\\nAll of the reviewers agreed that the paper is a solid contribution for ICLR. Reviewers praised the novelty and significance of the DataEnvGym, and anticipated follow-up works on data-generation agents.\\nThe authors addressed the weaknesses identified by the reviewers very skillfully during the rebuttal.\", \"additional_comments_on_reviewer_discussion\": \"Reviewers highlighted that there are missing experiment details (e.g. 
costs for running experiments) that the authors supplied during the rebuttal.\\nThe authors also compared against active learning and random baselines for data generation/selection, and the proposed agents were substantially better across multiple domains.\\nReviewers requested a more careful analysis of where the improvements were coming from (from the additional data, or from additional training) and the authors ran a thoughtful experiment to check.\\nAll of these findings during the rebuttal period substantially strengthened the paper.\"}", "{\"comment\": \"Thank you, reviewer VQ9Y! We truly appreciate your kind words and your effort in reviewing our work.\"}", "{\"comment\": \"Thank you Reviewer c5nB! We're glad that you felt our additional experiments were well-designed + sound. We truly appreciate your effort in reviewing the paper and are grateful for your thoughtfulness in increasing your score.\"}", "{\"summary\": \"This paper introduces Gym environments for data synthesis, framing the problem as sequential decision-making. In these environments, actions correspond to data-generation plans, and states represent the performance summary of a student model. The paper implements environments for three tasks: visual question answering (VQA), math, and code generation. Each environment offers three state representations: open-ended, skill-list, and skill-tree. Additionally, it proposes an LLM-based policy for data generation. Experimental results demonstrate that the LLM can make strategically effective choices based on environment-state information.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"4\", \"strengths\": [\"Tackle a timely and interesting problem.\", \"Provide the necessary infrastructure for the community to study the problem, opening up opportunities for future contributions.\", \"Consider various data generation strategies.\", \"Well-designed experiments which demonstrate the effectiveness of the proposed approaches and conduct insightful analyses.\"], \"weaknesses\": [\"The paper is currently dense and difficult to follow. The introduction includes excessive implementation details, which detract from providing a simple, high-level intuition. Using a specific task example to guide readers through the core concepts would make the paper more accessible.\", \"The paper focuses solely on the data generation plan rather than a full, end-to-end data generation process. It relies on a fixed, off-the-shelf data-generation engine that cannot be modified. The authors should admit this limitation and discuss potential strategies for overcoming it.\", \"The quality of the data-generation engine can impact both student performance and the data-generation plan itself. Current approaches do not take into account the data-generation engine's capabilities in the design of the policy or the evaluation of the student. For instance, poor student performance might result from the engine producing low-quality data on a specific skill, which could prompt the policy to avoid querying the engine for that skill.\", \"The learning procedure can be resource-intensive. The authors should report the time, cost, and computing resources used for the experiments.\"], \"questions\": [\"Is it possible to implement a random-policy baseline where you randomly choose a set of (naturally collected) datapoints from a data pool? 
The no-state baseline has flavor of this baseline but LLM-informed decisions could be biased.\", \"Is it possible to compare this approach with active learning, in which instead of doing data generation, you do data *selection* and ask models to generate only synthetic labels, but not synthetic inputs?\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer VQ9Y (Part 2/2)\", \"comment\": \"**Q1: We implement a random data selection baseline.**\\n\\nData selection is not possible in general as many domains lack a data source from which to easily sample data (e.g., LiveCodeBench). Therefore, we implement it for MATH, as a standard training set is available. The random selection baseline cannot improve a student when sampling an equivalent amount of data as the data generation baseline. The results are shown below.\\n\\nWe hypothesize that the random natural data selection baseline cannot improve a student like Gemma2-2B because easily accessible data pools (e.g., the training set for MATH) have already been exhausted by extensive LLM post-training \\\\[C$\\\\\\\\S$4.2,D$\\\\\\\\S$4\\\\] and so do not add new information.\\n\\n| | Before Training | Random Data Selection | Data Generation (Without State) | Data Generation (With State) | \\n|---|---|---|----|----| \\n| MATH Accuracy | 15.78 | 15.26 | 19.78 | **23.44** |\\n\\n**Q2: We implement a data selection agent.**\\n\\nWe implement data selection using prototypicality scores \\\\[A\\\\] which are standard for active learning. Similar to the random selection baseline, it is hard to improve a well-post-trained LLM like Llama3 or Gemma2 by using readily available data pools \\u2014 it is much easier to improve them using generated data. Even using the full training dataset cannot improve the student. This motivates our choice to tackle data generation rather than data selection. The training of open-source frontier models (Llama3, for example) includes significant post-training that subsumes publicly available data sources \\\\[B$\\\\\\\\S$4.2, C$\\\\\\\\S$4\\\\], making it hard to improve them with any amount of already existing data. \\n\\n| | Before Training | Data Selection (Prototypicality) | Full Training Dataset | Data Generation (Open-Ended) | \\n|---|---|-----|---|----| \\n| MATH Accuracy | 15.78 | 16.01 | 15.18 | **23.44** |\\n\\n\\\\[A\\\\] Sorscher et al., Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS 2022 Outstanding Paper Award \\n\\\\[B\\\\] Llama Team, AI @ Meta, The Llama 3 Herd of Models, arXiv 2024 \\n\\\\[C\\\\] Gemma Team, Google Deepmind, Gemma 2: Improving Open Language Models at a Practical Size, arXiv 2024\"}", "{\"title\": \"Response to Reviewer rVo8\", \"comment\": \"Thank you for stating that we make a good contribution to automated data generation and quality feedback\\\\!\\n\\n**W1: We clarify that our focus is on synthetic data generation for training purposes**. \\nWe have added and highlighted text to the introduction in L050-051 in the revised PDF that clarifies our focus is on data generation for training purposes.\\n\\n**W2: Related works.** \\nThanks for providing the additional related works that fit into our section focused on simulations/games with a fixed set of actions and skills. 
We have cited them and discussed them in Section 4 under the paragraph \\u201cTraining Environment Generation\\u201d (L503-506 in the revised PDF).\\n\\n**W3: We truncate experiments when performance decreases**. \\nThis is not a typo \\u2014 we truncate when performance begins to saturate. This is a choice we made to speed up experiments, but it is certainly possible to run environments for longer.\\n\\n**W4: We add repeated runs of experiments to characterize variance**. \\nWe repeated the open-ended experiments 3x for each domain. The open-ended environment is the least constrained so we expect the highest variance here. The overall improvement is higher than the variance in each case. \\n\\n| | Multimodal (GQA) | MATH | LiveCodeBench | \\n|---|---|---|---| \\n| Before Teaching | 44.18 | 15.78 | 16.50 | \\n| Open-Ended (3 runs) | 53.25 $\\\\\\\\pm$ 1.97 | 21.55 $\\\\\\\\pm$ 1.42 | 18.55 $\\\\\\\\pm$ 0.27 |\\n\\n**Q1: How does the performance of the data generation agents change over longer interactions?** \\nIt differs by environment. In the MATH and LiveCodeBench environments, the performance saturates with increased training. In the GQA environment, the performance seems to continue increasing up to 56%, but becomes more unstable (fluctuations up and down). \\n\\n**Q2: Is the total training data fixed in each allocation, or does it vary dynamically?** \\nWe set a maximum budget for an experiment and terminate the experiment when the budget is exhausted or the student saturates, whichever happens earlier. It is up to the policy to decide how it wants to allocate the budget across skills and iterations. In the baseline policies, we leave this decision up to the LLM except for the skill-tree environment, where we allocate data uniformly across skills and subskills because it is a reasonable baseline.\"}", "{\"title\": \"Followup to Reviewer rVo8\", \"comment\": \"Thanks for the great suggestions and continued engagement\\\\!\\n\\nWe've redone Fig. 5 on training dynamics in the style you've suggested: \\n\\n1. We've run extended experiments instead of truncating them. \\n2. We now show entire training curves to illustrate what the longer training progression would look like and add visual elements to show where the truncation occurred. \\n3. We've added error bands based on our re-runs to characterize variance. \\n4. We\\u2019ve rephrased Lines 460 \\\\- 465 as per your suggestions.\"}", "{\"title\": \"Paper Decision\", \"decision\": \"Accept (Spotlight)\"}", "{\"summary\": \"The paper presents DataEnvGym, a framework designed to simulate environments for data generation agents. These agents iteratively generate synthetic data to address weaknesses in student models, aiming to improve model performance across tasks like mathematics, programming, and visual question answering. 
DataEnvGym provides various structured environments (Open-Ended, Skill-List, and Skill-Tree) where data generation agents create targeted training examples based on feedback from the student model, offering a dynamic approach to automated model improvement.\", \"soundness\": \"2\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": [\"Good contribution to automated data generation for model improvement.\", \"Clearly written with structured sections explaining each environment type and experimental results.\"], \"weaknesses\": \"- The paper should clarify early on that the focus is on synthetic data generation for training purposes, as this underpins the motivation for the approach.\\n- Important related works on algorithms using feedback from training to generate the next training environments are missing [1, 2, 3, 4].\\n- Lines 460 - 465, I believe there is a typo whereby it says that \\u201ceach experiment is truncated once the performance consistently decreases for multiple iterations\\u201d. Should it be \\u201cincreases\\u201d?\\n- Repeated runs of experiments without confidence intervals will be valuable, especially since the variance of performance seems to be very high.\\n\\n[1] Sudhakaran, S., Gonz\\u00e1lez-Duque, M., Freiberger, M., Glanois, C., Najarro, E., & Risi, S. (2024). Mariogpt: Open-ended text2level generation through large language models. Advances in Neural Information Processing Systems, 36.\\n[2] Todd, G., Earle, S., Nasir, M. U., Green, M. C., & Togelius, J. (2023, April). Level generation through large language models. In Proceedings of the 18th International Conference on the Foundations of Digital Games (pp. 1-8).\\n[3] Zhang, J., Lehman, J., Stanley, K., & Clune, J. (2023). Omni: Open-endedness via models of human notions of interestingness. arXiv preprint arXiv:2306.01711.\\n[4] Faldor, M., Zhang, J., Cully, A., & Clune, J. (2024). OMNI-EPIC: Open-endedness via Models of human Notions of Interestingness with Environments Programmed in Code. arXiv preprint arXiv:2405.15568.\", \"questions\": [\"How does the performance of the data generation agents change over longer iterations? The paper truncates experiments when performance increases, but it would be insightful to explore whether performance plateaus or continuously increase over extended training.\", \"Is the total training data allocation fixed in each environment, or does it vary dynamically? The methodology mentions rebalancing but lacks clarity on how these allocations adjust adaptively based on feedback.\"], \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"rating\": \"6\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you so much for looking into my feedback and working on it. I am in the process of reviewing the updated manuscript and will let you know but so far you have pretty much addressed my concerns. Cheers!\"}", "{\"title\": \"Response to Reviewer VQ9Y (Part 1/2)\", \"comment\": \"Thank you for the quality feedback and for noticing our contributions to open-source infrastructure\\\\!\\n\\n**W1: We have added a new Figure 6 in Appendix B (L864-884), guiding the reader through a concrete task example.** \\nThe figure walks a reader through a round of data generation for the multimodal task using GQA as an example.\\n\\n**W2-1: The data generation engine is swappable and not required for all domains**. \\nThe data generation engine is only fixed for the multimodal setting, where it relies on an off-the-shelf T2I model to generate images. 
For the code generation and math settings, the data generation policy directly produces the data in an end-to-end manner. \\n\\n**W2-2: What strategies exist for modifying the data generation engine?** \\nOur framework easily allows updating the data generation agent (policy \\\\+ engine) when using an open source LLM. For example, we could update the parameters of the data generation policy and data generation engine using experiences from multiple rounds of data generation through reinforcement learning. \\n\\n**W3: How can the teacher take into account the weaknesses of the data generation engine or itself?** \\nWe have designed DataEnvGym as an RL-style setting so the policy can learn over subsequent iterations what the data generation engine\\u2019s capabilities are. Our position is that the capabilities of the data generation engine should be discovered by the policy through a process of experimentation. The Skill-Tree environment explicitly provides a mechanism for this. Our framework supports policy learning of what the teaching capabilities of the agent are. For example, after allocating data for a skill and observing a lack of improvement, the policy can infer that the data generation engine has trouble with generating data for the skill and avoid data generation for that skill in subsequent iterations.\\n\\n**W4: Experiments can be run for fewer than \\\\<$1 and under 10h on a single GPU.** \\nOn a single A6000, the total training time for our most computationally expensive setting (multimodal) is 6h, or about 1.5h/iteration. Most other settings are much faster. Environments are fully parallelizable using Ray and can be scaled up to multiple GPUs and even multiple nodes. \\n\\nWe\\u2019ve added a table showing the token and time costs (Appendix B.4, Table 4, L918-931), which we summarize below. Additionally, we conduct experiments with a cheaper teacher, GPT-4o-mini, showing that it can be used as a cheaper alternative for GPT-4o. We\\u2019ve added these results in Appendix B.4 (Table 5, L972-986) of the revised PDF.\\n\\n| Domain | Environment | Num Tokens | \\\\$ Cost (GPT-4o-mini) | \\\\$ Cost (GPT-4o) | GPU Minutes / Iteration | \\n|---|---|---|---|---|---| \\n| Math | Open-Ended | 173234 | 0.10 | 1.73 | 24 | \\n| Math | Skill-List | 318528 | 0.19 | 3.19 | 24 | \\n| Math | Skill-Tree | 355033 | 0.21 | 3.55 | 16 | \\n| Coding | Open-Ended | 279304 | 0.17 | 2.79 | 16 | \\n| Coding | Skill-List | 497787 | 0.30 | 4.98 | 16 | \\n| Coding | Skill-Tree | 967610 | 0.58 | 9.68 | 16 | \\n| Multimodal | Open-Ended | 25073 | 0.02 | 0.25 | 37 | \\n| Multimodal | Skill-List | 82419 | 0.05 | 0.82 | 134 | \\n| Multimodal | Skill-Tree | 33991 | 0.02 | 0.34 | 78 |\"}", "{\"comment\": \"Thank you, reviewer rVo8! We sincerely appreciate your thoughtfulness and engagement.\"}", "{\"comment\": \"Dear authors,\\n\\nThank you for conducting the additional experiments and incorporating the results and findings. This addresses three of my major concerns:\", \"w1\": \"Figure 12 demonstrates that students' performance improves when evaluated on test sets which incorporates the newly generated data points. Although the evaluation was only conducted on the multimodal and MATH environments, and not on the coding environment due to technical difficulties, I believe this set of experiments is well-designed and sound.\", \"q1\": \"Appendix E (Table 6) proves that the student performance is increased due to the added data rather than insufficient training initially. 
The result is valid and sound.\", \"q2\": \"The added figures (Figure 10), combined with the existing ones, provide a good coverage of both qualitative and quantitative results for skill discovery.\\n\\nTaking these into account, I have raised the score to 8.\"}", "{\"comment\": \"I thank the authors for the new experiments and clarifications.\\n\\n> This is not a typo \\u2014 we truncate when performance begins to saturate. This is a choice we made to speed up experiments, but it is certainly possible to run environments for longer.\\n\\nThat does not mean that the performance decreases. Decreases mean that the accuracy is dropping. Also, it is not clear in Figure 5 if the performance increase saturated.\\n\\n> We repeated the open-ended experiments 3x for each domain. The open-ended environment is the least constrained so we expect the highest variance here. The overall improvement is higher than the variance in each case.\\n\\nWhy not include it in Figure 5?\\n\\n> It differs by environment. In the MATH and LiveCodeBench environments, the performance saturates with increased training. In the GQA environment, the performance seems to continue increasing up to 56%, but becomes more unstable (fluctuations up and down).\\n\\nGiven this, why not include the full training progression in Figure 5 instead of truncating it? Providing more clarification on the decision to truncate would be helpful. Alternatively, adding an indicator on the figure to show where the truncation occurred and illustrating what the longer training progression would look like could address this.\"}", "{\"summary\": \"This paper presents a modular system for automated data generation, designed to minimize the need for human annotations. The proposed approach employs a reinforcement learning-inspired methodology, decomposing the process into a sequence of action predictions (data generation policy) based on state information (feedback from model errors) in an iterative manner. The effectiveness of this approach is demonstrated through three diverse tasks, encompassing text, image, and code generation across different modalities.\", \"soundness\": \"4\", \"presentation\": \"3\", \"contribution\": \"3\", \"strengths\": \"This paper presents a novel and insightful perspective on the autonomous data generation problem, leveraging principles from reinforcement learning to conceptualize it as a sequential decision-making process. The authors provide a thorough explanation of this approach, the motivations behind and the underlying mechanics.\\n\\nThis paper proposed a modular framework/testbed that can be easily adapted to various tasks, showcasing its versatility and potential for widespread applicability. The authors demonstrate the effectiveness of their approach through experiments on 3 tasks of multiple modalities, including text, image, and code generation, yielding promising early results.\", \"weaknesses\": \"The experiment part should be conducted more thoroughly: specifically, creating a test set that incorporates newly generated data points from the data generation agent and reporting evaluation results for each retrained model over successive iterations would provide more comprehensive insights into the system's performance.\", \"questions\": \"In the Experiments section, the authors mention that the baseline student model should not have been heavily post-trained so that there are rooms for further improvements. 
However, it would be beneficial to provide additional evidence and details to support the claim that the student's performance is improved due to the added data points rather than insufficient training. For instance, the training protocol involved a fixed 10-epoch training period; it remains unclear whether the model had reached convergence within this timeframe or if the introduction of new data points accelerated convergence. Further clarification on this aspect would enhance the overall validity of the results.\\n\\nAlso the result would be more sound if more quantitative and qualitative results for skill discovery is reported in this paper.\", \"flag_for_ethics_review\": \"['No ethics review needed.']\", \"details_of_ethics_concerns\": \"N/A\", \"rating\": \"8\", \"confidence\": \"3\", \"code_of_conduct\": \"Yes\"}", "{\"comment\": \"Thank you for the additional experiments and explanation. I have updated my score accordingly.\"}", "{\"title\": \"Follow up to reviewer c5nB\", \"comment\": \"Given that there is only one day remaining in the rebuttal period, **we wanted to gently check in whether our rebuttal addressed all your questions or we are happy to address any remaining questions.** We\\u2019ve added experiments to address your questions about (a) test sets that incorporate generated data (b) whether added data or training is responsible for performance increases and we also add more qualitative results on skill discovery. We hope that these additional results and answers will allow you to revisit your score \\u2014 otherwise, we are happy to engage further!\"}", "{\"comment\": \"Thank you once again for your valuable feedback! We hope our response has addressed all of your questions and will allow you to revisit your score. We would be happy to engage further and address any further questions you might have in the remaining few days of the discussion period.\"}", "{\"title\": \"Response to Reviewer c5nB\", \"comment\": \"We\\u2019re glad you find DataEnvGym novel and insightful\\\\!\\n\\n**W1: The student improves on generated test sets over successive iterations.** \\nFollowing your suggestion, we conducted experiments with generated test sets. We summarize the results/findings below and have added them to Appendix D (L1173-1182) and Figure 12 (L1188,1199) in our revised PDF. \\nFor each setting, we show the performance of the student on test sets that incorporate newly generated data points over successive iterations. \\nConcretely, we evaluate the performance of a student from iteration n on a test set created from data generated in iteration n+1 (unseen training data). \\nThis is only easily possible in the multimodal and MATH environments \\u2014 the coding environment accuracy is determined by unit tests, which we do not currently generate. 
\\nIn all cases, the student improves on the generated test sets over successive iterations, and accuracy on the generated test set is higher in the last iteration than in the first.\\n\\n| Iteration | Accuracy (Generated Math Data) | Accuracy (Generated Multimodal Data) | \\n|---|----|-----| \\n| 0 | 29.25 | 45.52 | \\n| 1 | 21.18 | 54.71 | \\n| 2 | 29.41 | 53.85 | \\n| 3 | 41.56 | 60.09 | \\n| 4 | 41.03 | 57.66 | \\n| 5 | 57.53 | N/A | \\n| 6 | 46.15 | N/A | \\n| 7 | 50 | N/A | \\n| 8 | 65.22 | N/A | \\n| 9 | 67.06 | N/A |\\n\\nNote that the multimodal environments were only run for half the iterations of the mathematics environments.\\n\\n**Q1: The students' performance increases due to added data points rather than insufficient training.** \\nTo substantiate the claim that student performance is increased due to added data points rather than insufficient training, we take a subset of the data and increase the number of epochs such that the student receives a fraction of the added data, but an equivalent number of epochs as training on the full data. \\nFor example, if a student is normally trained for 10 epochs with 1000 generated training data, we take the data from the first data generation iteration (let\\u2019s say it contains 200 training data) and train an alternative student for $\\\\\\\\frac{1000}{200}\\\\\\\\times10=50$ epochs to isolate the effect of the generated training data vs the added training epochs. \\nIn each case, training with less data but for more epochs produces significantly smaller improvements than training with more data for fewer epochs, showing that *data* is responsible for increased performance rather than more training. \\nIn fact, extending training without additional data typically hurts performance \\u2014 fresh data is essential. This highlights the importance of studying data generation as we do in our paper, as data generation is one of the few ways to get fresh data. \\nWe have added these results in Appendix E (Table 6\\\\) in L1184-1223 in the revised PDF.\\n\\n| | Data | Epochs | Accuracy (GQA) | \\n|----|---|---|---| \\n| Before Teaching | \\\\- | \\\\- | 44.18 | \\n| Less Data / Longer Training | 20% | 15 | 42.79 | \\n| More Data / Standard Training | 100% | 3 | **47.9** |\\n\\n| | Data | Epochs | Accuracy (MATH) | \\n|----|---|---|---| \\n| Before Teaching | \\\\- | \\\\- | 15.78 | \\n| Less Data / Longer Training | 10% | 30 | 13.98 | \\n| More Data / Standard Training | 100% | 3 | **23.44** |\\n\\n| | Data | Epochs | Accuracy (LiveCodeBench) | \\n|----|---|---|---| \\n| Before Teaching | \\\\- | \\\\- | 16.5 | \\n| Less Data / Longer Training | 20% | 15 | 15 | \\n| More Data / Standard Training | 100% | 3 | **18.91** |\\n\\n**Q2: We add more qualitative results for skill discovery.** \\nFollowing your suggestion, we have added an additional figure showing a full list (Figure 10, L1115-1132 in the revised PDF) of discovered skills for MATH, GQA, and LiveCodeBench in the SKILL-LIST environments in Appendix C. We have also added another figure showing qualitative examples of skill-errors that were fixed by training on synthetic data in Appendix C, highlighting the utility of skills in our framework. In summary, we now have 5 figures showing qualitative examples of skill discovery and one quantitative analysis of skill discovery in Appendix C.\"}", "{\"title\": \"Thank you\", \"comment\": \"Thanks for the tremedous effort put into the response. 
It addressed most of my concerns so I raise the score to 8.\"}", "{\"summary\": \"This paper introduces DataEnvGym, a novel testbed of teacher environments for developing data generation agents that iteratively improve student models by generating targeted training data. DataEnvGym frames data generation as a sequential decision-making task where an agent, comprising a data generation policy and engine, interacts with an environment that provides feedback from a student model. The agent's goal is to improve student model performance by generating training data based on student feedback (errors or weak skills). DataEnvGym offers multiple instantiations of teacher environments across three levels of structure: open-ended, skill-list, and skill-tree, each with varying levels of scaffolding support. Experiments across text and image-based tasks (mathematics, programming, and visual question answering) demonstrate that example agents within DataEnvGym can iteratively improve student model performance. Furthermore, the authors analyze the impact of state information, environment structure, and skill discovery quality on agent performance and student learning. The paper concludes that DataEnvGym, with its modular design and support for diverse tasks and student models, provides a valuable platform for developing and evaluating data generation agents, engines, and feedback mechanisms for automated model improvement. The code and leaderboard will be publicly released.\", \"soundness\": \"3\", \"presentation\": \"2\", \"contribution\": \"3\", \"strengths\": \"Novel Problem: Automating data generation to improve models is a significant challenge with practical applications. This work directly addresses this problem with a novel approach.\", \"well_defined_framework\": \"DataEnvGym is presented as a well-defined framework with clear components (trainer, evaluator, data generation policy, data generation engine) and different levels of structure (open-ended, skill-list, skill-tree). This structure makes the problem tractable and facilitates modular development and testing.\", \"multiple_tasks_and_domains\": \"The inclusion of experiments across diverse tasks (mathematics, programming, visual question answering) and with different student models demonstrates the generalizability of the framework.\", \"promising_results\": \"The initial results showing improved student model performance across tasks and environments are encouraging and suggest the potential of this approach. The analysis of difficulty/rarity and training dynamics adds value.\", \"open_source_release\": \"The commitment to publicly releasing the code and leaderboard promotes reproducibility and encourages further research in this area.\", \"weaknesses\": \"Limited Evaluation of Agent Architectures: The focus is primarily on the environment itself, with less emphasis on the architecture and training of the data generation agents. While baseline agents are provided, more sophisticated agent designs (e.g., reinforcement learning agents, agents leveraging larger language models) and their systematic evaluation would significantly strengthen the paper. How do different agent architectures compare in terms of effectiveness and efficiency? Are there specific architectural choices that are particularly well-suited for this task?\", \"over_reliance_on_llms_for_data_generation\": \"While using LLMs for data generation is a reasonable starting point, it raises concerns about the quality and diversity of the generated data. 
Exploring alternative data generation methods (e.g., data augmentation techniques, programmatic data generation) and comparing their effectiveness with LLM-based generation would be valuable. How robust is the framework to the quality of the generated data?\", \"limited_analysis_of_skill_discovery_quality\": \"The paper briefly discusses the impact of oracle skills on student performance but doesn't delve deeply into the quality of the skills discovered by the proposed LLM-based method. A more thorough analysis is needed to understand the strengths and limitations of the skill discovery module. This could involve quantitative measures of skill quality, such as measuring their coherence, coverage, and relevance to the target task, or qualitative analysis by human experts. Investigating how the quality of the discovered skills affects the performance of the data generation agents and the resulting student models would strengthen the paper's contribution. Exploring alternative skill discovery methods (e.g., clustering-based approaches, topic modeling) and comparing their effectiveness with the proposed method would further enhance the analysis.\", \"lack_of_comparison_with_existing_methods\": \"The paper positions DataEnvGym as a novel approach for model improvement, but it lacks a direct comparison with existing methods like curriculum learning (Bengio et al., 2009) or active learning (Settles, 2009). Evaluating how DataEnvGym compares to these established techniques in terms of student model performance, data efficiency, and computational cost would provide valuable context and highlight the advantages of the proposed framework. This would also clarify the specific niche and contribution of DataEnvGym within the broader landscape of model improvement techniques.\", \"limited_discussion_of_scalability\": \"The experiments in the paper are conducted with relatively small datasets and models. It's essential to address the scalability of DataEnvGym to more realistic scenarios involving larger datasets, more complex models, and a broader range of skills. Discussing the computational challenges and potential optimizations for scaling the framework to more demanding settings would strengthen the paper's practical relevance. For instance, how can the computational cost of LLM-based data generation be reduced while maintaining data quality? How can the skill discovery and agent training processes be optimized for larger datasets? Addressing these questions would provide valuable insights for future research and practical applications.\", \"questions\": \"Limited Evaluation of Agent Architectures: The paper primarily focuses on introducing the DataEnvGym environment, but the evaluation of data generation agents is limited to relatively simple baseline policies. Exploring more sophisticated agent architectures, such as reinforcement learning agents (e.g., using policy gradient methods, Q-learning) or agents incorporating larger language models for planning and decision-making (similar to the approaches used in Shimabucoro et al. (2024), would substantially strengthen the paper. 
A systematic comparison of different agent architectures in terms of their effectiveness in improving student models, their sample efficiency, and their computational cost would provide valuable insights and contribute to a better understanding of the challenges and opportunities in automated data generation.\", \"flag_for_ethics_review\": \"['Yes, Discrimination / bias / fairness concerns', 'Yes, Potentially harmful insights, methodologies and applications']\", \"rating\": \"8\", \"confidence\": \"4\", \"code_of_conduct\": \"Yes\"}", "{\"title\": \"Response to Reviewer wuGW\", \"comment\": \"Thank you for recognizing the value of DataEnvGym and pointing out the \\u201c_novel problem_\\u201d we address as well as our \\u201c_well-defined framework_\\u201d and \\u201c_promising results_\\u201d.\\n\\n**W1/Q1: We include multiple agent architectures and experiment with two additional teacher LLMs**. \\nEach environment requires a different agent architecture, so we have 3 in total. We also try with different teacher LLMs: GPT-4o (Table 2, L378-393) and GPT-4o-mini (Table 5, L972-986 in the revised PDF). \\n\\n**W2: Data is generated by several components working together.** \\nIn addition to generating data via LLM, we experiment with multimodal grounding datasets. \\nIn these cases, the data is generated by a text-to-image model. In all cases, the LLM is only a component of a pipeline that involves many modules, such as skill discovery and a data generation engine. For example, in the SKILL-TREE environment, the policy making decisions about what skills to generate data for are not LLMs and can be classical controllers.\\n\\n**W3/Q2: We have added additional analysis of the learned skills.** \\nFollowing your suggestion, we have added an additional figure showing a full list (Figure 10, L1115-1132 in the revised PDF) of discovered skills for MATH, GQA, and LiveCodeBench in the SKILL-LIST environments in Appendix C. We have also added another figure showing qualitative examples of skill-errors that were fixed by training on synthetic data in Appendix C, highlighting the utility of skills in our framework. In summary, we now have 5 figures showing qualitative examples of skill discovery and one quantitative analysis of skill discovery in Appendix C.\\n\\n**W4/Q3: We compare with active learning.** \\nWe implement data selection using prototypicality scores \\\\[A\\\\] which are standard for active learning. Similar to the random selection baseline, it is hard to improve a well-post-trained LLM like Llama3 or Gemma2 by using readily available data pools \\u2014 it is much easier to improve them using generated data. Even using the full training dataset cannot improve the student. This motivates our choice to tackle data generation rather than data selection. The training of open-source frontier models (Llama3, for example) includes significant post-training that subsumes publicly available data sources \\\\[B, $\\\\\\\\S$4.2, C$\\\\\\\\S$4\\\\], making it hard to improve them with any amount of already existing data. \\n\\n| | Before Training | Data Selection (Prototypicality) | Full Training Dataset | Data Generation (Open-Ended) | \\n|---|---|-----|---|----| \\n| MATH Accuracy | 15.78 | 16.01 | 15.18 | **23.44** |\\n\\n**W5/Q4: DataEnvGym has been designed for scalability.** \\nOn a single A6000, the total training time for our most computationally expensive setting (multimodal) is 6h, or about 1.5h/iteration. Most other settings are much faster. 
Environments are fully parallelizable using Ray and can be scaled up to multiple GPUs and even multiple nodes. We\\u2019ve added a full accounting of token and GPU costs in Table 4, L918-931 of the revised PDF.\\n\\n\\\\[A\\\\] Sorscher et al., Beyond neural scaling laws: beating power law scaling via data pruning, NeurIPS 2022 Outstanding Paper Award \\n\\\\[B\\\\] Llama Team, AI @ Meta, The Llama 3 Herd of Models, arXiv 2024 \\n\\\\[C\\\\] Gemma Team, Google Deepmind, Gemma 2: Improving Open Language Models at a Practical Size, arXiv 2024\"}", "{\"title\": \"General Response\", \"comment\": \"Reviewers believe we tackle \\u201c*a timely and interesting problem*\\u201d (VQ9Y) with a \\u201c*novel and insightful perspective on the autonomous data generation problem*\\u201d (c5nB), making a \\u201c*good contribution to automated data generation for model improvement*\\u201d (rVo8).\", \"the_potential_impact_of_our_work_in_making_a_challenging_problem_accessible_is_noted_by_several_reviewers\": \"\\u201c*necessary infrastructure for the community to study the problem*\\u201d (VQ9Y), that our \\u201c*structure makes the problem tractable*\\u201d (wuGW) and has \\u201c*potential for widespread applicability*\\u201d (c5nB).\\n\\nWe show that experiments can be run in half a day with limited compute resources (1x A6000) for under $1 of OpenAI API credits, making it an accessible testbed for developing data generation agents. \\n\\n**We thank all reviewers for their valuable feedback and suggestions**. We have provided responses to all of the reviewer questions in the rebuttals in the individual responses and the revised PDF (updated text is in blue).\"}", "{\"comment\": \"Thank you Reviewer wuGW for your feedback/engagement and positive appraisal of our work! We're glad our rebuttal was able to address your questions.\"}" ] }
y518qXG5Cb
Efficient Generation of Diverse Scientific Hypotheses through Stepwise Conceptual Concretization
[ "Yatima Kagurazaka", "Keita Nishimoto", "Shiro Takagi", "Kimitaka Asatani", "Ichiro Sakata" ]
In recent years, the automation of research using LLMs has been advancing rapidly. While The AI Scientist can generate papers that meet the acceptance criteria of top conferences in the machine learning field under specific conditions, there are limitations to the innovativeness of the generated research. As a step toward improving quality, this study aims to develop a method that generates scientific hypotheses of equivalent quality with significantly fewer tokens. The proposed method, which generates hypotheses more than ten times more efficiently, was compared with previous research in terms of novelty, importance, clarity, feasibility, and validity of the generated hypotheses. While no clear differences were observed in novelty and feasibility, improvements in performance were recognized in terms of importance, clarity, and validity compared to previous research.
[ "Hypothesis Generation", "Large Language Models", "AI for Science" ]
Reject
https://openreview.net/pdf?id=y518qXG5Cb
https://openreview.net/forum?id=y518qXG5Cb
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "tmoCwVyfAu", "UYhUA03OxQ", "Lso7vgVBrx" ], "note_type": [ "official_review", "decision", "official_review" ], "note_created": [ 1741145913398, 1741148113489, 1741041615101 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission27/Reviewer_4nyi" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission27/Reviewer_5Sxu" ] ], "structured_content_str": [ "{\"title\": \"Official Review\", \"review\": \"This paper introduces a method called Tree of Generation, designed to efficiently generate diverse scientific hypotheses using fewer computational resources than existing AI-driven research automation methods like The AI Scientist. Instead of generating hypotheses one at a time, the proposed method structures the generation process in multiple steps, progressively refining ideas through conceptual concretization. This approach significantly reduces token usage while maintaining or improving hypothesis significance, clarity, and validity. The method is evaluated using LLM-based comparative analysis, showing that it produces hypotheses of similar novelty and feasibility but with higher clarity and effectiveness compared to previous approaches.\", \"strengths_of_the_paper\": \"1. The proposed approach produces more refined and structured hypotheses by progressively concretizing concepts.\\n2. The proposed approach can be applied across multiple scientific fields, increasing its versatility.\", \"weaknesses_of_the_paper\": \"1. Evaluation is not human-verified, raising concerns about real-world applicability.\\n2. No major improvement in novelty or feasibility, limiting its impact on groundbreaking discoveries.\", \"rating\": \"4\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"An effective method, but with incremental innovation and possibly limited applicability\", \"review\": \"This paper proposes the Tree of Generation, a stepwise method to efficiently generate diverse scientific hypotheses while reducing computational costs. By progressively refining ideas through multiple stages, it improves significance, clarity, and effectiveness compared to previous method The AI Scientist, while maintaining similar novelty and feasibility.\", \"strengths\": \"1. This paper proposes a simple and easy-to-use method to generate scientific hypotheses. Empirical results verified the quality of the generated hypotheses.\\n\\n2. The proposed method requires significantly fewer input tokens for hypotheses generation.\", \"weaknesses\": \"1. [Motivation] Through out this paper, neither is it clearly discussed why it is needed to use fewer tokens, nor is it supported by empirical results that fewer tokens will bring significant benefits.\\n\\n2. [Effectiveness] This paper claims to increase \\\"diversity\\\" of generated hypotheses. However, all related analysis is based on the functionality of the model itself, without comparing it to state-of-the-art model. Besides, all analysis is intuitive, except for Table 1, which cannot provides convincing empirical evidence.\\n\\n3. [Limited Applicability] The way that the proposed model works is to generate multiple hypotheses according to a pre-defined three-step template. However, it is not discussed whether this template can cover the majority of possible hypotheses generation problems. Without such a template, the LLMs themselves are able to process different hypotheses generation problems. 
This feature of the method may lead to limited applicability.\\n\\n4. [Clarity] In Figure 2, the number of tokens needed for different models is compared. It is not clearly defined how the \\\"number of generated hypotheses\\\" is counted. It would be unfair if the proposed model generates 10 hypotheses in a batch using the same command while the compared baseline generates 10 different hypotheses using different commands.\\n\\n5. [Typo?] In this paper, \\\"section 4.2\\\" is mentioned twice, while there is no such section. I reckon this may be a typo.\", \"rating\": \"5\", \"confidence\": \"4\"}" ] }
xGuAWQNJCW
MOOSE-Chem: Large Language Models for Rediscovering Unseen Chemistry Scientific Hypotheses
[ "Zonglin Yang", "Wanhao Liu", "Ben Gao", "Tong Xie", "Yuqiang Li", "Wanli Ouyang", "Soujanya Poria", "Erik Cambria", "Dongzhan Zhou" ]
Scientific discovery contributes largely to the prosperity of human society, and recent progress shows that LLMs could potentially catalyze the process. However, it is still unclear whether LLMs can discover novel and valid hypotheses in chemistry. In this work, we investigate this main research question: whether LLMs can automatically discover novel and valid chemistry research hypotheses, given only a research question? With extensive discussions with chemistry experts, we adopt the assumption that a majority of chemistry hypotheses can result from a research background question and several inspirations. With this key insight, we break the main question into three smaller fundamental questions. In brief, they are: (1) given a background question, whether LLMs can retrieve good inspirations; (2) with background and inspirations, whether LLMs can lead to a hypothesis; and (3) whether LLMs can identify good hypotheses to rank them higher. To investigate these questions, we construct a benchmark consisting of 51 chemistry papers published in Nature or at a similar level in 2024 (all papers have only been available online since 2024). Every paper is divided by chemistry PhD students into three components: background, inspirations, and hypothesis. The goal is to rediscover the hypothesis given only the background and a large chemistry literature corpus consisting of the ground truth inspiration papers, with LLMs trained on data up to 2023. We also develop an LLM-based multi-agent framework that leverages the assumption, consisting of three stages reflecting the three smaller questions. The proposed method can rediscover many hypotheses with very high similarity to the ground truth ones, covering the main innovations.
[ "scientific discovery; LLM agents" ]
Accept (Oral)
https://openreview.net/pdf?id=xGuAWQNJCW
https://openreview.net/forum?id=xGuAWQNJCW
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "wmsMu38Bpo", "rjdbfAt6C9" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1741147303364, 1741144790431 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission8/Reviewer_xpgf" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review for MOOSE-Chem\", \"review\": \"The paper is accepted as an invited talk.\", \"rating\": \"10\", \"confidence\": \"5\"}" ] }
tW8HpTwZ0T
Orchestrating Tool Ecosystem of Drug Discovery with Intention-Aware LLM Agents
[ "Mingyu Derek Ma", "Karina Zadorozhny", "Jesse Swanson", "Nathan C. Frey", "Keunwoo Choi", "Maksim Eremeev", "Sabrina J Mielke", "Wenmo Sun", "Melody Liu", "Jonathan Wickes", "Vladimir Gligorijevic", "Richard Bonneau", "Henri Dwyer", "Kyunghyun Cho", "Stephen Ra" ]
Fragmented tools and models, along with complex decision-making under incomplete and heterogeneous information, often hinder the drug discovery process. Large Language Models offer promising capabilities in commonsense reasoning and tool integration, yet their application in drug discovery remains constrained by challenges such as the inability to handle a large tool space, limited planning capabilities based on scientific intentions, and unscalable evaluation. We introduce GenieAgent, a drug discovery agent that integrates a wide range of molecule design models and bridges user intentions to concrete actions by navigating the large skill ecosystem. By unifying disparate tools under a single natural language interface, GenieAgent enables cross-tool reasoning and supports complex scientific workflows. We also propose an evaluation framework that simulates drug discovery conversations based on real-world experiments. A large-scale assessment, validated by expert annotations, demonstrates that GenieAgent reliably meets the majority of molecular engineers' needs with high scientific accuracy and robustness.
[ "LLM agents", "drug discovery" ]
Accept (Poster)
https://openreview.net/pdf?id=tW8HpTwZ0T
https://openreview.net/forum?id=tW8HpTwZ0T
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "UUd8SlhYhc", "LHeAxqwBLi", "0P0aO2w5mk" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1741073830777, 1741147138550, 1741148248381 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission34/Reviewer_1GAi" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission34/Reviewer_5YHq" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Measure the agreement between human annotators and the LLM judge on three turn-level quality metrics\", \"review\": \"**Paper Summary**\\n\\nThis paper introduces GenieAgent, an innovative drug discovery agent characterized by three core components. Firstly, it retrieves reference intention-action pairs to guide Large Language Models (LLMs). Secondly, it integrates metadata-indexed searching tools and specialized agents for capability-focused subtasks, allowing for a division between high-level planning and low-level subtask execution. Lastly, it incorporates hint nodes that activate LLMs for predefined follow-up subtasks under specific conditions, thereby achieving a balance between workflow controllability and flexibility. Additionally, the paper proposes an automatic agent evaluation framework, which includes test samples, an evaluation agent, and an LLM judge. Test samples are derived from real-world drug discovery experiments, with user intentions generated by models. The evaluation agent simulates a drug discovery scientist requesting GenieAgent to resolve the intended objectives, while the LLM judge evaluates turn-level intermediate quality across three quality dimensions. These assessments, combined with the overall success rate, serve as metrics for evaluating drug discovery agents. A series of experiments validate the effectiveness of the agent framework and its individual components.\\n\\n**Strengths**\\n\\nThe design of the GenieAgent system is meticulous, particularly with its hint nodes mechanism, which effectively balances workflow controllability and flexibility. The evaluation framework is impressive, as it mirrors human scientific thinking by considering a spectrum of user intentions from vague to clear. Experiments further substantiate the effectiveness of GenieAgent, demonstrating its state-of-the-art performance.\\n\\n**Weaknesses**\\n\\nIt would be beneficial to measure the agreement between human annotators and the LLM judge on three turn-level quality metrics using Cohen\\u2019s kappa[1] or other methods.\\n\\nFurthermore, conducting an error analysis of the GENIEAGENT would be beneficial, as it could identify existing challenges and outline directions for future research.\\n\\n\\n\\n [1] McHugh, Mary L. \\\"Interrater reliability: the kappa statistic.\\\" Biochemia medica 22.3 (2012): 276-282.\", \"rating\": \"9\", \"confidence\": \"4\"}", "{\"title\": \"Official Review\", \"review\": \"The paper introduces GENIEAGENT, an intention-aware AI agent designed to orchestrate drug discovery tools using Large Language Models (LLMs). The drug discovery process relies on various computational models for molecular design, scoring, and ranking, but existing approaches lack seamless integration and adaptability to user intentions. GENIEAGENT addresses these issues by providing a unified natural language interface that bridges scientific intentions to concrete tool usage, enabling cross-tool reasoning and dynamic workflow orchestration. 
The system incorporates specialized agents, indexed search utilities, and hint nodes to balance structured guidance with open-ended exploration. A novel evaluation framework simulates scientific discovery conversations, and results show that GENIEAGENT significantly outperforms baseline AI agents, achieving higher scientific accuracy and robustness when assisting molecular engineers.\", \"suggested_improvements\": \"1. The authors can introduce domain-agnostic agent modules to improve generalization beyond drug design.\\n2. The authors can add explainability features for scientific decision-making, allowing users to audit the AI\\u2019s reasoning process.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}" ] }
o00rMAu8sj
A Simplified a priori Theory of Meaning; Nature Based AI 'First Principles'
[ "Marcus Abundis" ]
This paper names structural fundaments in ‘information’, to cover an issue seen by Claude Shannon and Warren Weaver as a missing “theory of meaning”. First, varied informatic roles are noted as likely elements for a general theory of meaning. Next, Shannon Signal Entropy as a likely “mother of all models” is deconstructed to note the signal literacy (logarithmic Subject-Object primitives) innate to ‘scientific’ views of information. It therein marks GENERAL intelligence ‘first principles’ and a dualist-triune (2-3) pattern. Lastly, it notes ‘intelligence building’ as named contexts wherein one details meaningful content—rendered via material trial-and-error—that we later extend abstractly. This paper thus tops today’s vague sense of Open World ‘agent intelligence’ in artificial intelligence, framed herein as a multi-level Entropic/informatic continuum of ‘functional degrees of freedom’; all as a mildly-modified view of Signal Entropy. —Related video found at: $\href{https://youtu.be/11oFq6g3Njs?si=VIRcV9H3GNJEYzXt}{The Advent of Super-Intelligence}$.
[ "information", "information theory", "semantics", "meaning", "entropy", "intelligence", "general intelligence", "Shannon", "nature", "open world", "cosmos" ]
Accept (Poster)
https://openreview.net/pdf?id=o00rMAu8sj
https://openreview.net/forum?id=o00rMAu8sj
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "oKldlmgZUh", "kpPLBX43On", "P2fUUqr7E2", "2NrU1KEadP" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741147269517, 1741066161175, 1741145218424, 1741040157338 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission4/Reviewer_4C38" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission4/Reviewer_YQEj" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission4/Reviewer_wTqX" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"A Simplified a priori Theory of Meaning; Nature Based AI 'First Principles'\", \"review\": \"### **Strength**:\\n\\n1) **Strong Motivation**: The paper tackles an important problem in information theory and AI, namely the absence of a clear theory of meaning. It refers to a framework that explains how language, symbols, or information carries semantic content - essentially, how meaning is created, represented, and understood. The authors are trying to develop a framework that explains how meaning emerges in both natural and artificial systems, which they argue is essential for advancing general artificial intelligence beyond purely statistical approaches.\\n\\n2) **Interdisciplinary approach**: The paper draws from information theory, cognitive science, and philosophy attempting to synthesize concepts across disciplines.\\n\\n### **Weakness:**\\n\\n1) **Lack of clear research question**: The paper does not explicitly define a central research question or hypothesis. Instead, it discusses various broad and abstract ideas related to information theory and intelligence without a focused inquiry. Despite proposing a theoretical framework, the paper lacks formalism or precise definitions that would make its claims testable or falsifiable.\\n\\n2) **Complex writing and paper structure**: The writing style is unnecessarily complex and philosophical, employing convoluted sentences that impede comprehension rather than facilitating it. Many key concepts, arguments, and claims remain vaguely defined. The paper's structure is challenging to follow, with concepts introduced before being fully explained and circular references between ideas. The writing style is dense and often obscures rather than clarifies the underlying concepts. Many paragraphs contain multiple complex ideas without sufficient development of each.\\n\\n3) **Lack of clear methodology section**: The paper lacks a clear methodological section. It presents theoretical constructs without explaining how they were derived or how they could be validated. For a theory paper, there should be clearer explanation of the analytical methods used to develop the framework.\\n\\n4) **Limited empirical support**: The limited examples provided (particularly regarding the Periodic Table and Standard Model) are interesting but insufficiently developed to demonstrate the framework's utility. The paper would benefit from detailed case studies showing how the theory addresses specific problems in AI or information theory.\\n\\n5) **Unclear practical implications**: While claiming to provide \\\"first principles\\\" for AI, the concrete applications and implications for AI system development remain underspecified.\\n\\n### **Summary**:\\nThe paper addresses an important problem and presents interesting ideas about the nature of meaning and information that could potentially inform AI development. 
However, in its current form, it lacks the clarity, precision, and supporting evidence needed for a significant contribution to the field\", \"rating\": \"3\", \"confidence\": \"3\"}", "{\"title\": \"Official Review\", \"review\": \"This paper introduces a framework for a theory of meaning based on information theory and entropy, addressing a gap identified by Claude Shannon and Warren Weaver. It argues that current views on information lack a structured way to define meaning and proposes a dualistic Subject-Object (S-O) model to bridge this gap. The author examines Shannon\\u2019s Signal Entropy and extends it into a multi-level framework that describes general intelligence, adaptive processes, and structured information. The paper suggests that meaning emerges from trial-and-error interactions with the physical world, leading to hierarchical patterns of intelligence. By integrating entropy, semiotics, and evolutionary processes, this approach seeks to provide a foundation for artificial and human intelligence, offering a general framework for understanding how information becomes meaningful.\", \"strengths_of_the_paper\": \"1. The paper addresses a fundamental gap in information theory by focusing on meaning.\\n2. The paper incorporates entropy and semiotics, linking physical and informational processes.\\n3. The paper provides a cross-disciplinary perspective, unifying theories from physics, biology, and AI.\\n4. Suggests a testable framework for building intelligent systems with meaningful interactions.\", \"weaknesses_of_the_paper\": \"1. Lack of empirical validation, relying mostly on theoretical arguments.\\n2. Definitions of meaning and intelligence remain broad, requiring further refinement.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"A Simplified a priori Theory of Meaning; Nature Based AI 'First Principles'\", \"review\": \"# 1. Summary\\n\\nThis paper attempts to complete Shannon\\u2019s information theory by distinguishing between the subjective (S) and objective (O) aspects of information. It uses S-O modeling and Signal Entropy to explain how meaning is created from data, in order to found a general theory of intelligence.\\n\\n # 2. Strengths\\n\\n 1. **Novel Insight:**\\n The paper presents a new perspective by combining information theory with concepts from cognitive science.\\n2. **Interdisciplinary:**\\n It builds on the work of several disciplines to create a unified framework.\\n3. **Conceptually Rich:**\\n It presents fairly conceptual and very interesting ideas on the production of meaning.\\n\\n # 3. Weakness\\n1. **Dense Language:** \\n The manuscript is filled with many technical terms that make it hard to understand, especially for readers not familiar with the field.\\n2. **Limited Practical Validation:** \\n The paper doesn\\u2019t include enough real-world examples or case studies, which makes it harder to see how the theory would work in practice.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
kwxyhs9HWn
ENHANCING DIVERSITY AND NOVELTY IN TEXT GENERATION VIA MULTI-VIEW EMBEDDINGS
[ "Arash Lagzian", "Srinivas Anumasa", "Dianbo Liu" ]
Large Language Models (LLMs) demonstrate remarkable proficiency in generating accurate and fluent text. However, they often struggle with diversity and novelty, leading to repetitive or overly deterministic responses. These limitations stem from constraints in training data, including gaps in specific knowledge domains, outdated information, and an over-reliance on textual sources. Such shortcomings reduce their effectiveness in tasks requiring creativity, multi-perspective reasoning, and exploratory thinking. To address this challenge, we introduce multi-view embeddings, a novel approach that enriches input prompts with diverse perspectives derived from both textual and visual sources. By incorporating additional contextual information, this method enhances the variety and creativity of generated outputs. Importantly, our approach is model-agnostic, requiring no architectural modifications and being compatible with both open-source and proprietary LLMs. Furthermore, we propose a comprehensive evaluation framework that simultaneously measures diversity, novelty, and correctness—a first-of-its-kind methodology for assessing these three crucial aspects of LLM-generated content. We evaluate our method and framework on over 469,000 generated outputs from various well-known LLMs, demonstrating significant improvements in output diversity and novelty while maintaining quality and relevance. Our approach provides a scalable and practical solution for enhancing LLM performance across a wide range of applications, including brainstorming, creative writing, and multiple-choice question generation.
[ "Diversity and Novelty", "Large Language Models", "Multiview Embeddings" ]
Accept (Poster)
https://openreview.net/pdf?id=kwxyhs9HWn
https://openreview.net/forum?id=kwxyhs9HWn
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "oMdhvk1ef8", "RMcFQWwYW2", "LOaTUYNyBF" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1741146432399, 1741059744945, 1741148342989 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission31/Reviewer_p1QY" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission31/Reviewer_zBNX" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Good Paper\", \"review\": \"The paper proposes a multi-view embedding approach to improve diversity and novelty in Large Language Model (LLM) text generation. Traditional LLMs often produce repetitive outputs due to their reliance on training data patterns. The multi-view embedding method addresses this by incorporating different perspectives from both textual and visual sources before generating responses. This approach is model-agnostic, meaning it can be used with any LLM without modifying its architecture. Additionally, the paper introduces a comprehensive evaluation framework that simultaneously measures diversity, novelty, and correctness in generated content. The study evaluates 469,000 generated responses, showing that multi-view embeddings significantly enhance response variety while maintaining relevance and accuracy, making this technique useful for tasks requiring creative and exploratory thinking.\", \"strengths_of_the_paper\": \"1. Improves diversity in LLM-generated text, reducing repetitive or overly deterministic responses.\\n2. Model-agnostic approach, making it compatible with different LLMs without changes.\\n3. Demonstrates significant improvements over standard prompting methods across multiple tasks.\", \"weaknesses_of_the_paper\": \"1. Limited to external context sources, which may not always be relevant or accurate.\\n2. No human evaluation of creativity, relying solely on automated metrics.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Reviews of Submission 31\", \"review\": [\"**Strengths**\", \"The paper introduces a model-agnostic method using multi-view embeddings (both text and image-based) that can be applied to various LLMs without architectural modifications, making it widely applicable across models.\", \"The study presents extensive experimental results across 469K generated outputs from various LLMs, demonstrating significant improvements.\", \"**Weaknesses**\", \"While the paper explains the process of using image views, it doesn't clearly describe how relevant images are initially selected for a given prompt or how the quality of these images impacts performance.\", \"The paper would benefit from more direct comparisons with other techniques that aim to improve diversity in text generation (such as decoding strategies like nucleus sampling).\", \"Some practical aspects of implementing the multi-view approach remain unclear, such as computational overhead, latency impacts, and how it would integrate with existing applications.\"], \"rating\": \"5\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}" ] }
e8JgXGeuqJ
CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models
[ "梁学辰", "Yangfan He", "Meiling Tao", "Yinghui Xia", "Yijin Wang", "Jianhui Wang", "Kun Li", "Jiayi Su", "TIANYU SHI", "Jun Wang", "Yang Jingsong" ]
Open large language models (LLMs) have significantly advanced the field of natural language processing, showcasing impressive performance across various tasks. Despite the significant advancements in LLMs, their effective operation still relies heavily on human input to accurately guide the dialogue flow, with agent tuning being a crucial optimization technique that involves human adjustments to the model for better response to such guidance. Addressing this dependency, our work introduces the TinyAgent model, trained on a meticulously curated high-quality dataset. We also present the Collaborative Multi-Agent Tuning (CMAT) framework, an innovative system designed to augment language agent capabilities through adaptive weight updates based on environmental feedback. This framework fosters collaborative learning and real-time adaptation among multiple intelligent agents, enhancing their context-awareness and long-term memory. In this research, we propose a new communication agent framework that integrates multi-agent systems with environmental feedback mechanisms, offering a scalable method to explore cooperative behaviors. Notably, our TinyAgent-7B model exhibits performance on par with GPT-3.5, despite having fewer parameters, signifying a substantial improvement in the efficiency and effectiveness of LLMs.
[ "Multi-Agent Collaboration", "Language Model Tuning", "Memory Adaptation" ]
Accept (Oral)
https://openreview.net/pdf?id=e8JgXGeuqJ
https://openreview.net/forum?id=e8JgXGeuqJ
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "ijtkrZQjOg", "gSxoLo2sR1", "NMYX9qxuYF", "L5Et2VAvE1" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1741114446287, 1740703366707, 1741143224392, 1740679128166 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission12/Reviewer_um3C" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission12/Reviewer_PeYb" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission12/Reviewer_bbzd" ] ], "structured_content_str": [ "{\"title\": \"CMAT: A Multi-Agent Collaboration Tuning Framework for Enhancing Small Language Models\", \"review\": \"The paper introduces CMAT, a multi-agent collaboration framework designed to enhance small language models by enabling adaptive learning through agent feedback and memory mechanisms. It presents TinyAgent-7B, which achieves performance comparable to GPT-3.5 despite its smaller size, demonstrating efficiency in database querying, OS interactions, and web-based tasks. However, its effectiveness depends on the base model and the quality of the prompts. An ablation study examining each agent's contribution and a detailed analysis of error propagation and mitigation strategies would further strengthen the work. Overall, the paper is well-written and makes a significant contribution toward improving SLMs, with future work needed to address scalability, prompt independence, and real-world applicability.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Good work\", \"review\": \"Summary:\\nThis paper proposes CMAT (Collaborative Multi-Agent Tuning), a framework designed to enhance small language models through multi-agent collaboration with specialized roles and environmental feedback mechanisms. The authors claim their TinyAgent-7B model can match GPT-3.5 performance despite having fewer parameters. Key contributions include: (1) a dynamic real-time memory update system, (2) a role-playing mechanism for task allocation, and (3) experimental evaluation across multiple agent tasks.\", \"strengths\": \"1. The paper is generally well-structured with clear explanations of the framework components.\\n2. The dual memory system approach (short-term and long-term) addresses a recognized limitation in current LLMs.\\n3. Experimental results across multiple domains (OS, DB, KG, ALF, WS, M2W) show performance improvements.\\n4. The ablation study effectively demonstrates the value of combining agent-specific and general instructions.\", \"weaknesses\": [\"1. Results and Comparisons:\", \"Some performance improvements vary significantly across tasks\", \"The comparison with existing agent frameworks could be more comprehensive\", \"Additional baselines would strengthen the evaluation of the framework's effectiveness\", \"2. Technical Contributions:\", \"The Actor-Critic methodology could benefit from more implementation details\", \"Clearer distinction between novel components and adaptations from prior work would help\", \"More analysis on how the various components interact would strengthen the paper\", \"Overall, it is a good paper, and I recommend accepting it.\"], \"rating\": \"8\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Farily Great Job\", \"review\": \"This paper introduces CMAT, a novel framework for enhancing small language models (SLMs) through multi-agent collaboration, and demonstrates its effectiveness via the TinyAgent model family. 
Below is a detailed review addressing the paper's strengths, weaknesses, and potential impact:\\n\\nSummary of Contributions\", \"the_authors_present_two_key_innovations\": \"1. TinyAgent Models: Compact LLMs (1.8B\\u20137B parameters) fine-tuned on Qwen and CodeLlama, achieving performance rivaling GPT-3.5 in tasks like SQL generation and OS interaction.\\n2. CMAT Framework: A multi-agent system with User, Assistant (Actor), and Checker (Critic) roles that enables dynamic weight updates through environmental feedback and memory mechanisms.\\n\\nThe framework integrates Chain-of-Thought reasoning, ReAct prompting, and supervised fine-tuning with LoRA/P-Tuning, showing a 58% improvement over base models in database tasks and 23% higher accuracy than AgentLM-7B in operating system interactions.\\n\\nStrengths\\n1. Innovative Architecture: CMAT's Actor-Critic design with dual-memory management (short-term context + long-term reflection) addresses critical gaps in LLM adaptability. The ablation study (Table 5) validates the necessity of both agent-specific and general instructions.\\n2. Efficiency Gains: TinyAgent-7B matches GPT-3.5's performance with 20x fewer parameters (Table 2), demonstrating the viability of SLMs in resource-constrained environments.\\n3. Rigorous Evaluation: The six-task benchmark (OS, DB, KG, ALFWorld, WebShop, M2W) provides a holistic assessment. The DB task analysis (Figure 3) effectively highlights the reflection mechanism's impact on error correction.\\n\\nWeaknesses and Recommendations\\n1. Limited Real-World Validation: Experiments rely on simulated environments (e.g., WebShop). Testing on live systems or industry datasets would strengthen claims about real-world applicability.\\n2. Narrow Baseline Comparison: While TinyAgent outperforms CodeLlama and Qwen, comparisons with other SLM frameworks (e.g., Microsoft's Phi-3) are absent.\\n3. Opaque Dataset Construction: Section 4.1 mentions \\\"self-collected methods\\\" but lacks details about data sources, cleaning protocols, or ethical considerations for crowd-sourced components.\", \"suggested_improvements\": \"1. Add latency/power consumption metrics to emphasize deployment advantages\\n2. Include error analysis for edge cases (e.g., nested SQL queries)\", \"rating\": \"7\", \"confidence\": \"3\"}" ] }
cYAFwjY2bY
Automatic Scientific Claims Verification with Pruned Evidence Graph
[ "Liri Fang", "Dongqi Fu", "Vetle I Torvik" ]
In general, automatic scientific claim verification methods retrieve evidence paragraphs, select rationale sentences, and predict the sentence stances for a given claim. Large language models (LLMs) are expected to be the next-generation tool to solve this task. However, due to the domain-specific claims, LLMs trained on the large-scale general corpus at least need external knowledge to warm up. Therefore, how to extract qualified and reasonable sentences with their stances toward a given claim is indispensable. GraphRAG is designed to learn the hierarchical relationships of context and selectively retrieve related information, improving LLMs’ reasoning in ad-hoc and domain-specific claim verification scenarios. Nevertheless, current GraphRAG methods typically require a pre-existing domain-specific knowledge base. Hence, a natural question can be asked: How far are we from automatically building a semantic graph and selecting rationale sentences for a pre-trained LLM, and which process is better to be independent of the pre-trained LLM? In this paper, we propose our ongoing research on distilling information across sentences by constructing a complete evidence graph and pruning it to capture the relevant connections between claim and paragraph sentences. This enables updating the sentence embeddings, and consequently enhances multiple-rationale sentence identification and stance prediction. The effectiveness of this proposed framework is empirically tested on SciFact, i.e., an open-access dataset in the biomedical domain. From the current stage, we discern that selected baselines, including our method, can hardly outperform across all experimental settings, which indicates many future research directions for researchers and practitioners.
[ "Scientific Claim Verification", "Evidence Graph Pruning", "Graph Neural Networks" ]
Accept (Poster)
https://openreview.net/pdf?id=cYAFwjY2bY
https://openreview.net/forum?id=cYAFwjY2bY
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "ditQ4EFCW8", "KviX7vlwjY" ], "note_type": [ "decision", "official_review" ], "note_created": [ 1745408036482, 1745407779865 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission38/Reviewer_egw2" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"comment\": \"Based on the meta-review, the decision is made to accept the paper for poster presentation.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Official Review for the Paper\", \"review\": \"The paper presents PrunE, a novel framework for scientific claim verification that combines pretrained language models with a pruned sentence-level evidence graph. The method first constructs a complete graph over evidence sentences paired with the claim, then prunes the graph using differentiable binary gates to retain only task-relevant connections. This pruned structure is then used to perform rationale sentence selection and stance prediction. The framework is evaluated on the SciFact dataset, where it shows competitive or superior performance to strong baselines. Ablation studies further validate the utility of the pruning mechanism and the graph construction process.\\n\\n> Strengths:\\n\\n1. The paper targets a timely and practically important problem: how to enhance domain-specific reasoning in pretrained language models for scientific verification without relying on pre-built external knowledge bases. The proposed use of graph-based structure learning, via pruning over complete evidence graphs, strikes a thoughtful balance between structure and learnability. The methodology is sound, and the incorporation of positional encoding in graph construction helps preserve rhetorical ordering, which is often crucial in scientific texts.\\n2. The empirical results are convincing. The paper benchmarks against strong baselines (KGAT, Paragraph-Joint, ARSJoint) and consistently improves key metrics, especially for combined rationale + stance prediction. The ablation study is a particular highlight, demonstrating that pruning is not just a regularizer but a necessary mechanism for improving signal-to-noise in graph construction.\\n3. The writing is clear and well-organized. The authors do a good job of situating their work within the context of prior methods and identifying the key limitations that motivate their approach. The figures and examples are informative and help concretize the core contributions.\", \"weaknesses\": \"1. The paper is focused exclusively on the SciFact dataset from the biomedical domain. While the motivation applies more broadly, the paper does not offer evidence of generalizability beyond this setting. It would strengthen the impact of the work to test transfer or at least discuss adaptation to other scientific domains.\\n2. There is limited discussion of failure cases. In particular, PrunE does not outperform all baselines in every setting, e.g., Paragraph-Joint achieves better results in rationale selection alone. Some qualitative analysis of where the pruning approach helps or hurts would provide useful insight into the model\\u2019s behavior.\\n3. Although the pruning mechanism is elegant and differentiable, there is little discussion of the computational cost or stability of the learned graph structures. It remains unclear how interpretable the pruned connections are or whether they remain consistent across runs.\\n4. 
The reliance on a complete graph at the start may raise scalability concerns as the size of abstracts or the number of evidence candidates increases. A brief discussion of inference time and model size would help assess real-world usability.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
bK20ivFGKE
Evaluation of a Robust Control System in Real-World Cable-Driven Parallel Robots
[ "Nurtdinov Damir", "Aliaksei Korshuk", "Alexei Kornaev", "Alexander Maloletov" ]
This study evaluates the performance of classical and modern control methods for real-world Cable-Driven Parallel Robots (CDPRs), focusing on underconstrained systems with limited time discretization. A comparative analysis is conducted between classical PID controllers and modern reinforcement learning algorithms, including Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), Soft Actor-Critic (SAC), and Trust Region Policy Optimization (TRPO). The results demonstrate that TRPO outperforms other methods, achieving the lowest root mean square (RMS) errors across various trajectories and exhibiting robustness to larger time intervals between control updates. TRPO's ability to balance exploration and exploitation enables stable control in noisy, real-world environments, reducing reliance on high-frequency sensor feedback and computational demands. These findings highlight TRPO's potential as a robust solution for complex robotic control tasks, with implications for dynamic environments and future applications in sensor fusion or hybrid control strategies.
[ "Reinforcement Learning", "DDPG", "PPO", "TRPO", "PID", "control", "cable-driven robot" ]
Accept (Poster)
https://openreview.net/pdf?id=bK20ivFGKE
https://openreview.net/forum?id=bK20ivFGKE
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "zbL0Fp4LSg", "oNMs74gTCP", "0QasyiEIV3" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741147399570, 1740807638519, 1740959504292 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission18/Reviewer_4PmK" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission18/Reviewer_ajxj" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Strong Analysis and Reproducibility, but Lacks Real-World Validation and Efficiency Insights\", \"review\": \"Summary:\\nThis paper evaluates the performance of classical PID controllers and modern reinforcement learning algorithms for controlling Cable-Driven Parallel Robots (CDPR) in real-world conditions. The study highlights TRPO's stability, efficiency, and adaptability in dynamic environments, positioning it as a promising approach for advanced robotic control applications.\", \"strengths\": \"1. The study systematically evaluates classical PID controllers alongside state-of-the-art reinforcement learning techniques. This comparative analysis offers valuable insights into the strengths and weaknesses of both approaches in CDPR control.\\n2. Unlike previous works that focus on ideal simulation conditions, this study examines the impact of time discretization on control performance. The results indicate that TRPO can maintain stability even with lower sensor update rates, making it more applicable to real-world deployment.\\n3. The authors have released their code for reproducibility.\", \"weaknesses\": \"1. While the study is based on real-world CDPRs, the evaluation is entirely simulation-based. Adding physical experiments would significantly enhance credibility.\\n2. TRPO requires significantly higher computational resources than PID controllers or even DDPG/PPO. However, the paper does not discuss the training time, hardware requirements, or inference speed. This information is crucial for assessing the practical feasibility of deploying TRPO in real-time control systems.\\n3. The paper provides limited insight into how hyperparameter tuning affects RL performance. An ablation study on learning rates, exploration strategies, and reward function variations would be valuable.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Review of Robust Control System\", \"review\": \"# Summary:\\nThis work compares classical PID controllers with modern RL algorithms for CDPRs. The results show that TRPO outperforms the other methods, achieving the lowest RMS and demonstrating robustness to larger control update intervals. These findings suggest TPRO as a promising solution for complex robotic control tasks, with potential applications in various real-world environments.\\n\\n# Strengths:\\n\\n1. The work is well-organized and easy to follow.\\n2. The study highlights TRPO's various advantages, providing valuable insights into its effectiveness in robust control system.\\n3. This work conducts extensive experiments to thoroughly evaluate the performance of different control methods, providing a comprehensive analysis of their effectiveness in real-world scenarios.\\n\\n# Weaknesses:\\n\\n1. The paper has a relatively limited coverage of related work.\", \"rating\": \"6\", \"confidence\": \"4\"}" ] }
a8Cdxj3MjR
Emerging Multi-AI Agent Framework for Autonomous Agentic AI Solution Optimization
[ "Kamer Ali Yuksel", "Hassan Sawaf" ]
Agentic AI systems automate complex workflows but require extensive manual tuning. This paper presents a framework for autonomously optimizing Agentic AI solutions across industries, such as NLG-driven enterprise applications. It employs agents for Refinement, Execution, Evaluation, Modification, and Documentation, using iterative feedback loops powered by an LLM (Llama 3.2-3B). The system optimizes configurations without human input by autonomously generating and testing hypotheses, enhancing scalability and adaptability. Case studies demonstrate a significant boost in output quality, relevance, and actionability. Data, including original and evolved agent codes and outputs, are open-sourced.
[ "Agentic AI Systems", "Multi-Agent AI Optimization", "Iterative Refinement", "AI-driven Hypothesis Generation", "AI System Evaluation and Feedback", "Automated AI System Adaptation", "Self-Improving AI Agents", "Adaptive AI Architectures", "AI for Scientific Discovery", "Autonomous AI Systems", "AI-driven Experimentation", "Self-Evaluating AI", "AI Benchmarking & Standardization", "AI-generated Hypothesis Validation", "LLM-driven AI Optimization", "Multi-Agent Coordination & Collaboration", "Evolutionary AI Architectures", "AI-driven Workflow Optimization", "Trustworthy AI Systems", "AI Model Transparency & Interpretability", "AI Robustness & Error Mitigation", "Human-in-the-loop AI for Science", "AI Safety & Reliability" ]
Accept (Poster)
https://openreview.net/pdf?id=a8Cdxj3MjR
https://openreview.net/forum?id=a8Cdxj3MjR
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "oeHXR12Fsk", "lpbTMHc1sl", "f6VOGxlXDd", "C6IobLbZ5a" ], "note_type": [ "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1740905394488, 1741039262021, 1740959867845, 1741142865237 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission5/Reviewer_TiMc" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission5/Reviewer_BDs1" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission5/Reviewer_WfTn" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"EMERGING MULTI-AI AGENT FRAMEWORK FOR AUTONOMOUS AGENTIC AI SOLUTION OPTIMIZATION\", \"review\": \"1. Autonomous AI agent optimization framework using iterative LLM-driven refinement.\\n\\n2. Eliminates manual tuning by automatically generating, evaluating, and modifying agent roles.\\n\\n3. Uses specialized agents (Refinement, Execution, Evaluation, Modification, Documentation) for optimization.\\n\\n4. Case studies in market research, AI architecting, career transitions, outreach, and lead generation show effectiveness.\\n\\n5. Evolved systems outperform original versions, achieving higher clarity, relevance, and actionability scores.\\n\\n6. Scalable across multiple industries but primarily tested on NLG applications.\\n\\n7. Mitigates LLM feedback loop risks but still susceptible to LLM-induced biases and hallucinations.\\n\\n8. No comparisons against strong baselines like AutoGPT, BabyAGI, or RL-based agent optimizers.\\n\\n9. Computational cost of iterative LLM inferencing not addressed, raising efficiency concerns.\\n\\n10. Framework\\u2019s adaptability to non-text-based AI agents (vision, robotics) remains untested.\\n\\n11. Risk of reward hacking if LLM-generated evaluation criteria introduce self-reinforcing biases.\\n\\n12. Lacks hybrid human-AI evaluation\\u2014unclear if human oversight could enhance refinement quality.\\n\\n13. No statistical significance tests on performance gains, reducing confidence in generalizability.\\n\\n14. Could be strengthened by adversarial scoring models to prevent deceptive optimizations.\\n\\n15. Solid framework but needs efficiency validation and stronger empirical comparisons for ICLR acceptance.\", \"rating\": \"6\", \"confidence\": \"5\"}", "{\"title\": \"Emerging Multi-AI Agent Framework for Autonomous Agentic AI Solution Optimization\", \"review\": \"# 1. Summary\\n\\nThis paper presents a framework that leverages large language models to autonomously refine and optimize agent-based AI systems for enterprise applications. Specialized agents\\u2014Hypothesis, Modification, Execution, Evaluation, and Documentation\\u2014collaborate in iterative feedback loops until the performance improvement\\n\\n$$\\n|S_{i+1} - S_{best}| < \\\\epsilon,\\n$$\\n\\nfalls below a threshold. The method enhances scalability, adaptability, and overall performance.\\n\\n# 2. Strengths\\n\\n1. **Innovative Integration:** \\n The paper introduces a fresh approach by using LLM-driven feedback loops to let the system improve itself, resembling a team of experts that refines its own strategies over time.\\n\\n 2. **Clear Methodological Design:**\\n The framework clearly defines agent roles and iterative interactions with less technical complexity. \\n\\n 3. **Empirical Support:**\\n Case studies are based on real numbers and demonstrate specific improvements in major performance indicators.\\n\\n# 3. Weaknesses\\n1. 
**High Computational Overhead** \\n The iterative LLM feedback loop is resource-intensive, which may limit scalability \\n\\n2. **Risk of Evaluation Biases:** \\n Relying on LLM assessments could introduce biases that affect the reliability of the results.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Review\", \"review\": \"The paper presents an LLM-powered framework for optimizing Agentic AI systems through feedback loops. The paper has performed various case studies for their proposed framework.\", \"weakness\": [\"While the paper discusses related work in L33-48, it should make it more clear how the proposed approach differs from previous work that also incorporates iterative design.\", \"The problem statement is not well introduced and could be framed more clearly to establish the motivation and significance of the study.\", \"The methodology lacks crucial details, such as the prompts used for different agents. The design principle behind the prompt is missing, and it remains unclear whether prompt engineering is required for every task.\", \"Regarding evaluation, it appears that the scores in Figure 1 are generated by the evaluation agent? If so, given that the backbone model is Llama 3.2-3B, it is difficult to assess the reliability of these scores. Incorporating additional standard metrics or human evaluation would strengthen the credibility of the results.\", \"The paper would also benefit from the evaluation of established benchmarks, such as MLAgentBench, by comparing the performance of the original and refined models.\", \"Text overlaps in Figure 2\", \"The paper claims that the proposed method enhances efficiency, but no experiments have been conducted to support this statement.\"], \"rating\": \"3\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}" ] }
ZKl1a1Dmvk
HEP-JEPA: A foundation model for collider physics
[ "Jai Bardhan", "Radhikesh Agrawal", "Abhiram Tilak", "Cyrin Neeraj", "Subhadip Mitra" ]
We present a transformer architecture-based foundation model for tasks at high-energy particle colliders such as the Large Hadron Collider. We train the model to classify jets using a self-supervised strategy inspired by the Joint Embedding Predictive Architecture. We use the JetClass dataset containing 100M jets of various known particles to pre-train the model with a data-centric approach --- the model uses a fraction of the jet constituents as the context to predict the embeddings of the unseen target constituents. Our pre-trained model fares well with other datasets for standard classification benchmark tasks. We test our model on two additional downstream tasks: top tagging and differentiating light-quark jets from gluon jets. We also evaluate our model with task-specific metrics and baselines and compare it with state-of-the-art models in high-energy physics. Therefore, this work contributes to the development of scientific foundation models by demonstrating how self-supervised transformer architectures can extract deep insights from high-energy physics data.
[ "foundation model", "self-supervised learning", "particle physics", "high energy physics", "jepa" ]
Accept (Poster)
https://openreview.net/pdf?id=ZKl1a1Dmvk
https://openreview.net/forum?id=ZKl1a1Dmvk
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "zBZIdPiVRo", "u5BvnV2zIa", "FH1Vr2Z1T8", "2hlxYoRzrt" ], "note_type": [ "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1741030263973, 1741129455543, 1741030840063, 1741143771720 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission36/Reviewer_8ezm" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission36/Reviewer_stpt" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission36/Reviewer_ubnq" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"review\", \"review\": \"The paper presents HEP-JEPA, a foundation model based on transformer architectures for tasks in high-energy physics, specifically at particle colliders such as the Large Hadron Collider. The model employs a self-supervised learning strategy inspired by the Joint Embedding Predictive Architecture, trained on the JetClass dataset. HEP-JEPA is compared against state-of-the-art models in the field with task-specific metrics and baselines. The paper also includes ablation studies to analyze the impact of different architectural choices and training strategies.\\n\\nMy main concerns about this paper are related to the experimental results, specifically the following points.\\n\\n1.\\tFew-Shot Learning Performance. Although the fine-tuned HEP-JEPA model consistently outperforms the model trained from scratch (Table 1), in few-shot learning tasks (0.05%- and 0.5%-label cases), the accuracy is around 0.5, which is close to random results. This raises concerns about whether the experimental results are meaningful for real-world problems where reliable performance is required even with limited labeled data.\\n\\n2.\\tLimited Improvement Over Other Methods. The paper compares HEP-JEPA with baseline supervised models and existing physics foundation models. While the fine-tuned HEP-JEPA achieves better accuracy than the supervised model trained from scratch and the frozen HEP-JEPA, the improvement is relatively small. Additionally, state-of-the-art models such as PARTICLENET and PART outperform HEP-JEPA by a significant margin. The authors should provide further analysis to clarify whether the increased complexity of JEPA-based training is effective.\\n\\n3.\\tComparison with More Methods. HEP-JEPA is a self-supervised based method, are there other self-supervised approaches in this field? In addition, the related work section mentions techniques such as Masked Particle Modelling and OmniJet-$\\\\alpha$, but there is no empirical comparison with these methods. \\n\\n4.\\tAblation Study. Some architecture in the ablation study shows only minor improvements or even no improvement (e.g., physics-inspired enhancements).\\n\\n5.\\tComputational Scalability. The training process requires significant GPU hours (320 hours on RTX 2080Ti), making it computationally expensive. How does this scale to larger datasets and practical deployment scenarios?\", \"rating\": \"4\", \"confidence\": \"3\"}", "{\"title\": \"Review for HEP-JEPA\", \"review\": [\"The paper introduces HEP-JEPA, a novel transformer-based foundation model designed for collider physics tasks. It adapts the Joint Embedding Predictive Architecture (JEPA) paradigm to learn high-level representations of particle jets. Pre-trained on the extensive JetClass dataset, the model is evaluated on key downstream tasks such as jet classification, top tagging, and quark\\u2013gluon discrimination. 
By leveraging self-supervised learning and innovative tokenization/masking strategies (including physics-inspired biases), HEP-JEPA demonstrates competitive performance, particularly in few-shot learning settings, and shows promise for more scalable and data-efficient analyses in high-energy physics.\", \"The paper exhibits strong quality through its rigorous methodology, comprehensive experimental setup, and detailed ablation studies that justify various design choices. It shows sufficient clarity with a systematic presentation of the model\\u2019s architecture, the pre-training paradigm, and the downstream evaluations. Adapting the JEPA framework to the challenging domain of collider physics is a novel approach, especially given the integration of physics-specific components within a transformer architecture.\", \"### Pros:\", \"This paper applies the JEPA paradigm from computer vision to high-energy physics in a novel way.\", \"This paper demonstrates effectiveness through thorough experiments on jet classification, top tagging, and quark-gluon discrimination.\", \"This work provides extensive ablation studies that offer insights into the effects of different model components and design choices.\", \"The paper is well-structured, with detailed descriptions of methodology, experimental setups, and results.\", \"### Cons:\", \"In certain downstream tasks, the absolute performance improvement over specialized supervised models is limited.\", \"Heavy reliance on the JetClass dataset might restrict direct applicability to real experimental data without further validation.\", \"The training process, while efficient in few-shot regimes, still requires substantial computational resources (e.g., 320 GPU hours), which could be a barrier for smaller research groups.\"], \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"HEP-JEPA: Good integration of domain knowledge, lacks novel method and significant experimental results\", \"review\": [\"Focusing on the jet classification task, the authors explore how self-supervised learning can extract deep insights from the dataset. The authors propose the HEP-JEPA framework to conduct self-supervised pretraining for jet classification. The framework includes physical insights to enhance performance. Experimental results show that the pretraining can greatly increase training efficiency, benefiting the final results.\", \"## Strengths:\", \"The model integrates several physical insights into the framework design.\", \"The ablation study shows that the proposed constraints, i.e., the physics bias and registers, make the results more robust.\", \"## Weaknesses:\", \"The proposed framework is not novel. It is a simple adaptation of existing SSL frameworks.\", \"How well the SSL pretraining performs is not directly compared. The authors use few-shot learning to demonstrate the effectiveness of SSL. However, comparing few-shot learning with training from scratch cannot directly demonstrate the effectiveness of SSL.\", \"Compared to SOTAs, the results are not significant.\", \"Include the abbreviation for FPS when it first appears.\", \"f_{\\\\theta} is used without definition. Please include an overview introduction of the encoder.\"], \"rating\": \"6\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}" ] }
XFC8Ddg7Dh
Game-Theoretic Multi-Agent Collaboration for AI-Driven Scientific Discovery
[ "Sandeep Ravindra Tengali" ]
This paper introduces a game-theoretic multi-agent AI framework where autonomous AI agents negotiate and refine hypotheses in either a cooperative or competitive scientific environment. By leveraging tools from Nash equilibrium analysis and cooperative game theory, agents can independently validate scientific hypotheses, manage shared computational resources, and optimize discovery pathways. Experimental results in climate modeling, astrophysics, and biomedical research show that this agentic AI approach significantly accelerates scientific exploration while providing robust conflict resolution among heterogeneous domain tasks. Our findings highlight both the theoretical foundations of multi-agent negotiation for scientific hypothesis generation and the practical potential to transform decentralized scientific collaborations.
[ "Multi-Agent Systems", "Game Theory", "Scientific AI", "Nash Equilibrium", "Cooperative Bargaining", "AI for Science", "HPC Scheduling", "AI Collaboration" ]
Reject
https://openreview.net/pdf?id=XFC8Ddg7Dh
https://openreview.net/forum?id=XFC8Ddg7Dh
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "e3yHWPlc26", "cEf2zmNM6P", "1VsxfIZcGu", "0ktxIOM9wo" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741147362054, 1740898139811, 1741110463224, 1741145668170 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission17/Reviewer_VZ9F" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission17/Reviewer_NATZ" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission17/Reviewer_7U8n" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"Lack of Intellectual Engagement and Scientific Rigor\", \"review\": \"Summary:\\n\\nThis paper proposes a game-theoretic multi-agent AI framework for scientific discovery, where AI agents negotiate hypotheses and manage computational resources using Nash equilibrium and cooperative game theory. The authors claim that their approach has been tested in climate modeling, astrophysics, and biomedical research, demonstrating improvements in hypothesis validation and resource efficiency. However, the manuscript primarily consists of bullet points rather than fully developed text, raising serious concerns regarding its originality, coherence, and scientific rigor.\", \"weaknesses\": \"1. The manuscript is largely composed of bullet points, lacking detailed explanations and complete sentences. This severely impacts readability and prevents a thorough assessment of its depth, clarity, and scientific contribution.\\n\\n2. Several critical components typically expected in a scientific paper are either missing or ineffective, including a well-structured Introduction, a clear Problem Statement, a thorough Related Work review, detailed Implementation and Experimental Methods, and a meaningful Result Analysis. The absence of these sections gives the impression of minimal human effort and intellectual engagement.\\n\\n3. A manual verification of citations revealed significant discrepancies. For instance, the first reference in the bibliography\\u2014 Zhu, Y., Morales, S., & Kim, M. (2021). Single-Agent vs. Multi-Agent Paradigms in Scientific AI. ACM Surveys on AI, 4(2), 101\\u2013120.\\u2014could not be found despite extensive searching. Furthermore, the cited venue, ACM Surveys on AI, appears to be non-existent, raising serious concerns about the validity and authenticity of the references.\\n\\nOverall, this paper raises significant concerns regarding potential AI-generated content. Even if human-authored, it suffers from fundamental flaws in theoretical grounding, empirical validation, and scientific contribution, making it unsuitable for publication.\", \"rating\": \"2\", \"confidence\": \"5\"}", "{\"title\": \"Game-Theoretic Multi-Agent Collaboration for AI-Driven Scientific Discovery\", \"review\": \"This paper presents a game-theoretic multi-agent AI framework for scientific discovery, where autonomous agents negotiate and refine hypotheses in competitive and cooperative settings. By incorporating Nash equilibrium and cooperative bargaining, the system effectively optimizes resource allocation and conflict resolution across diverse domains such as climate modeling, astrophysics, and biomedical research. However, the paper lacks clarity and depth, requiring more comprehensive details in each section. A more detailed definition of the Comparison Metrics will be helpful to better understand it. The Performance Analysis section needs a more thorough discussion to strengthen the evaluation. 
Additionally, while the framework enhances scientific collaboration, it faces challenges such as high computational complexity, uncertain synergy estimation, and scalability limitations beyond 50 agents.\", \"rating\": \"3\", \"confidence\": \"3\"}", "{\"title\": \"Official Review\", \"review\": \"This paper presents a game-theoretic multi-agent AI framework for scientific discovery, where autonomous AI agents collaborate or compete to generate and refine hypotheses. The system models resource allocation and hypothesis validation as strategic games, using Nash equilibrium for competitive scenarios and cooperative bargaining for collaborative ones. The approach is applied to climate modeling, astrophysics, and biomedical research, showing that it improves resource efficiency, scientific output, and negotiation stability. Experimental results demonstrate that this framework enhances computational resource sharing, accelerates hypothesis validation, and reduces conflicts in decentralized scientific collaborations.\", \"strengths_of_the_paper\": \"1. Provides a structured approach to resolving conflicts in scientific collaborations and uses both competitive and cooperative strategies, adapting to different research scenarios.\", \"weaknesses_of_the_paper\": \"1. The paper is not written properly. The discussion is very brief.\\n2. Computational complexity increases with more agents, making large-scale applications difficult.\", \"rating\": \"4\", \"confidence\": \"4\"}" ] }
Ulwyv3mrQ2
ML-Dev-Bench: Comparative Analysis of AI Agents on ML development workflows
[ "Harshith Padigela", "Chintan Shah", "Dinkar Juyal" ]
In this report, we present ML-Dev-Bench, a benchmark aimed at testing agentic capabilities on applied Machine Learning development tasks. While existing benchmarks focus on isolated coding tasks or Kaggle-style competitions, ML-Dev-Bench tests agents' ability to handle the full complexity of ML development workflows. The benchmark assesses performance across critical aspects including dataset handling, model training, improving existing models, debugging, and API integration with popular ML tools. We evaluate three agents - ReAct, Openhands, and AIDE - on a diverse set of 30 tasks, providing insights into their strengths and limitations in handling practical ML development challenges. We open source the benchmark for the benefit of the community.
[ "AI agents", "benchmark", "llms" ]
Reject
https://openreview.net/pdf?id=Ulwyv3mrQ2
https://openreview.net/forum?id=Ulwyv3mrQ2
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "ft0qG2b5i3", "dWryHyocSI", "943C7iMsM1", "5DKyMcZqUJ" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741143812457, 1741020152892, 1741128798159, 1741027901702 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission35/Reviewer_vKoR" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission35/Reviewer_icoR" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission35/Reviewer_NZE1" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"review\", \"review\": \"The paper presents ML-Dev-Bench, a benchmark designed to evaluate AI agents on full ML development workflows, covering tasks such as dataset handling, model training, debugging, API integration, model implementation, and performance optimization. The paper shows experimental results on 30 tasks to evaluate three agents ReAct, OpenHands, and AIDE.\", \"my_concerns_about_this_paper_are_as_follows\": \"1.\\tThe introduction is relatively brief and does not sufficiently introduce the research problem. It would benefit from a clearer explanation of the challenges in ML development that AI agents face, as well as a more structured discussion of existing research gaps to help readers better understand the motivation behind this benchmark.\\n\\n2.\\tIn lines 34-36 of the introduction, the authors mention that previous work failed to 'integrate with third-party tools, debug complex issues that span multiple components of the ML pipeline, and balance trade-offs like model performance and cost to come up with optimal design.' However, it is unclear how the proposed benchmark directly addresses these specific challenges.\\n\\n3.\\tAs a benchmark paper, the study evaluates only three agents (ReAct, OpenHands, and AIDE). Incorporating additional agents, such as ResearchAgent [1], and testing against a broader range of LLMs, including models like Gemini, would provide a more comprehensive evaluation of current AI capabilities.\\n[1] Qian Huang, Jian Vora, Percy Liang, and Jure Leskovec. MLAgentBench: Evaluating Language Agents on Machine Learning Experimentation. In Forty-first International Conference on Machine Learning, June 2024.\\n\\n4.\\tThe benchmark relies on a binary success/failure metric, which may not capture the task difficulty and partial success. According to Table 3, all agents (ReAct-Sonnet, OHSonnet, Aide-4o, and ReAct-4o) fail on 15 out of 30 tasks, and none succeed in performance optimization tasks, making it difficult to draw meaningful conclusions about their comparative effectiveness. I suggest a fine-grained metric to evaluate them.\\n\\n5.\\tSince LLM-based agents exhibit stochastic behavior, their performance can vary across multiple runs. However, the paper does not report results on repeated runs.\", \"rating\": \"3\", \"confidence\": \"4\"}", "{\"title\": \"Review\", \"review\": [\"This paper introduces ML-Dev-Bench, a novel benchmark designed to evaluate AI agents on the entire spectrum of machine learning development workflows, which encompasses real-world tasks such as dataset handling, model training, debugging, model implementation, API integration, and performance optimization. The benchmark comprises 30 carefully designed tasks and evaluates three different agents, providing a detailed analysis of their strengths and limitations in addressing practical ML development challenges. 
Additionally, the paper presents an evaluation framework called Calipers, which standardizes the testing process and quantifies performance through binary success metrics, token cost analyses, and comprehensive task breakdowns.\", \"The paper presents a well-motivated and clearly structured benchmark that fills an important gap in current evaluations of AI agents for ML development. Its clarity is shown by detailed task descriptions and systematic categorization. In terms of originality, ML-Dev-Bench offers a fresh perspective by moving beyond isolated coding tasks to assess the end-to-end challenges inherent in real-world ML workflows, which is a contribution that is both novel and timely.\", \"### Pros:\", \"This paper introduces a comprehensive evaluation framework tailored to real-world ML development workflows.\", \"This paper evaluates multiple AI agents, offering insights into their performance across diverse task types.\", \"Includes token cost analysis and granular success rates, which add depth to the evaluation.\", \"### Cons:\", \"Additional insights into common failure modes and error patterns would strengthen the overall evaluation.\", \"Could benefit from a more in-depth discussion of potential limitations and directions for future work.\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Review for ML-Dev-Bench\", \"review\": \"ML-DEV-BENCH benchmarked 3 agents (ReAct, Openhands, and AIDE) on ML development workflows consisting of 30 tasks. These tasks include Dataset Handling, Model Training, Debugging, Model Implementation, API Integration, and Performance. The results indicate that Openhands with Claude Sonnet performs best.\", \"strengths\": \"The included set of tasks is comprehensive, containing 30 tasks across 6 categories. \\n\\nThe tables present the results clearly.\", \"weaknesses\": [\"Insight: the rationale behind the design of the 30 tasks is not clear. Questions such as why these tasks are included, what specific challenges they are intended to address, and why these categories are important are not clearly answered.\", \"Results: The results only show which method performs better, without deeper analysis.\", \"The comparison includes only 3 agents, which is not enough to obtain robust results.\", \"The results should include more figures rather than only tables, which would improve readability.\"], \"rating\": \"3\", \"confidence\": \"4\"}" ] }
UeeyfR4CUg
AgenticHypothesis: A Survey on Hypothesis Generation Using LLM Systems
[ "Adib Bazgir", "Rama chandra Praneeth Madugula", "Yuwen Zhang" ]
Hypothesis generation is a cornerstone of scientific discovery, enabling researchers to formulate testable ideas that drive innovation and progress. Although traditional methods have proven effective, they are often constrained by cognitive biases, information overload, and time limitations. Recently, the emergence of Large Language Models (LLMs) has introduced a promising approach to automate and enhance hypothesis generation. Trained on extensive datasets, these models have demonstrated the potential to generate novel, testable hypotheses across various domains. This review critically examines state-of-the-art methodologies for LLM-based hypothesis generation, including Retrieval Augmented Generation (RAG), multi-agent frameworks, and iterative refinement techniques. Applications in biomedical research, materials science, product innovation, and interdisciplinary studies are discussed, highlighting both the versatility and the impact of these systems. Despite their promise, LLMs also present challenges such as hallucinations, data biases, and ethical concerns, which necessitate careful implementation and oversight. Future research directions include refining model architectures, integrating multimodal capabilities, and establishing robust ethical frameworks to optimize the use of LLMs in scientific research. Overall, this review provides a balanced overview of how LLMs may enhance hypothesis generation while also addressing the associated challenges. The GitHub repository containing open-source LLM-empowered hypothesis generation systems is available at https://github.com/adibgpt/AgenticHypothesis.
[ "Hypothesis Generation", "Large Language Models (LLMs)", "Retrieval Augmented Generation (RAG)", "Multi-agent Frameworks", "Iterative Refinement Techniques", "Scientific-domain Applications", "Evaluation Metrics" ]
Accept (Poster)
https://openreview.net/pdf?id=UeeyfR4CUg
https://openreview.net/forum?id=UeeyfR4CUg
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "wdy33cNqQM", "oVIMGou9CO", "h8wQTggtnn", "fPG81EHTzV" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740698447122, 1741143561771, 1740958193550, 1741041790879 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission20/Reviewer_RaVo" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission20/Reviewer_cGG8" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission20/Reviewer_4hMs" ] ], "structured_content_str": [ "{\"title\": \"Hypothesis Generation with Large Language Models: Opportunities and Challenges\", \"review\": \"# Summary:\\nThis survey provides a comprehensive overview of current approaches utilizing LLMs for hypothesis generation. The authors present a taxonomy of existing approaches and analyze their advantages, limitations, and potential future directions.\\n\\n# Strength:\\n1. The survey is well-structured, making it easy to follow.\\n2. The future directions highlight promising research avenues.\\n\\n# Weakness:\\n1. The review of existing methods can be more detailed rather than just listing them. E.g. , in section 4.2 about hallucinations, a deeper discussion on how this issue has been addressed, its bottleneck, and potential solutions would be beneficial.\\n2. A summary of commonly used datasets in this area would enhance the survey's completeness and usability.\\n3. Section 4.1 only mentions metrics but does not provide insights into the current SOTA performance. Including a comparison of top-performing methods would strengthen the survey\\u2019s analysis.\", \"rating\": \"5\", \"confidence\": \"5\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of AgenticHypothesis\", \"review\": \"Summary:\\nThis survey examines LLM-based hypothesis generation, highlighting methods like Retrieval-Augmented Generation (RAG) and multi-agent frameworks. It also discusses applications in various domains such as biomedical research, materials science, etc. While LLMs enhance discovery, challenges like biases and hallucinations persist. 
Future directions include refining architectures, integrating multimodal data, and strengthening ethical safeguards for responsible implementation.\", \"strengths\": \"1) This paper provides a comprehensive review of LLM-based hypothesis generation, covering key methodologies like Retrieval-Augmented Generation (RAG), multi-agent frameworks, and iterative refinement techniques.\\n2) The review does not just highlight the benefits of LLMs but also shows their limitations, such as hallucinations, biases, and ethical concerns, providing a well-rounded analysis.\\n3) By identifying future research directions, the paper offers valuable guidance for advancing LLM-based hypothesis generation.\", \"weaknesses\": \"1) The paper primarily summarizes existing methods but lacks case studies or deeper insights into the effectiveness of different LLM-based hypothesis generation approaches.\\n2) Although the paper mentions challenges like hallucinations and biases, it does not dive deeply into mitigation strategies or real-world case studies.\", \"rating\": \"5\", \"confidence\": \"5\"}", "{\"title\": \"AgenticHypothesis: A Survey on Hypothesis Generation Using LLM Systems\", \"review\": \"Paper Summary:\\n\\nThe paper hypothesizes that LLM-based systems can overcome the limitations of traditional hypothesis-generation methods by leveraging advanced reasoning techniques, modular workflows, and structured methodologies to generate novel, scientifically rigorous hypotheses across diverse fields.\", \"strengths\": [\"Structured Methodological Analysis: The survey effectively categorizes different methodological approaches (iterative refinement, RAG-based, multi-modal frameworks), making a complex field more accessible.\", \"Multi-Domain Application Review: The paper effectively showcases how LLM-based hypothesis generation works across diverse scientific fields.\", \"Balanced Evaluation Framework: The survey offers a multidimensional framework for evaluating hypothesis quality, providing comprehensive assessment criteria.\"], \"weaknesses\": [\"The paper acknowledges hallucination challenges but fails to explore mitigation strategies across different systems sufficiently.\", \"The review overemphasizes fully automated systems while neglecting hybrid approaches where scientists and LLMs collaborate to generate hypotheses.\", \"The paper lacks quantitative comparisons between LLM-based hypothesis systems, hindering objective assessment of their relative effectiveness.\"], \"questions_to_authors\": \"See above.\", \"rating\": \"6\", \"confidence\": \"3\"}" ] }
TyCYakX9BD
Agentic AI for Scientific Discovery: A Survey of Progress, Challenges, and Future Directions
[ "Mourad Gridach", "Jay Nanavati", "Christina Mack", "Khaldoun Zine El Abidine", "Lenon Mendes" ]
The integration of Agentic AI into scientific discovery marks a new frontier in research automation. These AI systems, capable of reasoning, planning, and autonomous decision-making, are transforming how scientists perform literature review, generate hypotheses, conduct experiments, and analyze results. This survey provides a comprehensive overview of Agentic AI for scientific discovery, categorizing existing systems and tools, and highlighting recent progress across fields such as chemistry, biology, and materials science. We discuss key evaluation metrics, implementation frameworks, and commonly used datasets to offer a detailed understanding of the current state of the field. Finally, we address critical challenges, such as literature review automation, system reliability, and ethical concerns, while outlining future research directions that emphasize human-AI collaboration and enhanced system calibration.
[ "Agentic AI; Scientific Discovery; Literature Review" ]
Accept (Poster)
https://openreview.net/pdf?id=TyCYakX9BD
https://openreview.net/forum?id=TyCYakX9BD
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "hLMTTuFULV", "gqn58aQEg9", "XUrJ4onHwH" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741147434849, 1740960862009, 1740938065382 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission21/Reviewer_zqmK" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission21/Reviewer_P6F5" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of the Survey Paper\", \"review\": \"Summary:\\nThis survey examines Agentic AI in scientific discovery, covering literature review automation, hypothesis generation, and experimentation. It reviews progress in chemistry and biology, discusses evaluation metrics, and addresses challenges like system reliability and ethics. Future directions focus on human-AI collaboration and improved system calibration for enhanced research effectiveness.\", \"strengths\": \"1) This paper provides a broad survey of existing Agentic AI systems, frameworks, and methodologies across multiple disciplines, including chemistry, biology, and materials science.\\n2) This paper outlines key implementation tools, datasets, and evaluation metrics, offering practical insights for researchers looking to develop or assess Agentic AI systems.\\n3) While highlighting the advancements of Agentic AI, this paper also critically discusses its limitations, such as challenges in automating literature reviews, concerns about system reliability, and ethical considerations. It emphasizes the importance of human-AI collaboration and system calibration to improve reliability.\", \"weaknesses\": \"1) This paper reviews various Agentic AI systems but does not provide a quantitative comparison of their effectiveness, scalability, or robustness. A benchmark study comparing different methods on common tasks would strengthen its contribution.\\n2) Although this paper discusses implementation tools and frameworks, it does not provide detailed guidance on how researchers can practically integrate Agentic AI into their workflows. More case studies or implementation examples would enhance its usability.\\n3) Figure 1 is referenced on page 4 but appears on page 6, making it difficult for readers to follow the discussion. Consider placing the figure closer to its first mention by using a long-format workflow graph.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"a comprehensive survey\", \"review\": \"This survey provides a comprehensive overview of Agentic AI for scientific discovery, categorizing existing systems and tools, and highlighting recent progress across fields such as chemistry, biology, and material science. The paper also discussed key evaluation metrics, implementation frameworks, and commonly used datasets to offer a detailed understanding of the current state of the field.\\n\\nSome parts of the paper are a little hard to follow. The authors can include more comparison tables/figures to assist the reader's understanding\", \"rating\": \"6\", \"confidence\": \"4\"}" ] }
TZ0RvZ8pw7
APPA : Agentic Preformulation Pathway Assistant
[ "Julius Lange", "Leonid Komissarov", "Nicole Wyttenbach", "Andrea Anelli" ]
The design and development of effective drug formulations is a critical process in pharmaceutical research, particularly for small molecule active pharmaceutical ingredients. This paper introduces a novel agentic preformulation pathway assistant (APPA), leveraging large language models coupled to experimental databases and a suite of machine learning models to streamline the preformulation process of drug candidates. APPA successfully integrates domain expertise from scientific publications, databases holding experimental results, and machine learning predictors to reason and propose optimal preformulation strategies based on the current evidence. This results in case-specific user guidance for the developability assessment of a new drug and directs towards the most promising experimental route, significantly reducing the time and resources required for the manual collection and analysis of existing evidence. The approach aims to accelerate the transition of promising compounds from discovery to preclinical and clinical testing.
[ "LLM", "Preformulation Sciences", "Drug Development", "Pharmaceutical Research", "Agentic Workflows", "Machine Learning" ]
Accept (Poster)
https://openreview.net/pdf?id=TZ0RvZ8pw7
https://openreview.net/forum?id=TZ0RvZ8pw7
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "vx9x4SaqDs", "pCRJ6EQbjo", "emBv2YvgZF", "Dj3bvJqHZI" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1741077913591, 1741143158729, 1740905922504, 1740938616958 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission22/Reviewer_XNWC" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission22/Reviewer_ZuX4" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission22/Reviewer_9PCH" ] ], "structured_content_str": [ "{\"title\": \"Is it necessary to rely on LLM to control the workflow in this DCS classification task?\", \"review\": \"**Paper Summary:**\\n\\nThis paper introduces an agent system designed to assist in the preformulation process. The system responds to user queries by retrieving experimental data and utilizing existing machine learning models to predict various physicochemical properties and assay outcomes, thereby facilitating efficient and informed decision-making during the preformulation stage.\\n\\n**Strengths:**\\n\\nThe proposed agent system has the potential to accelerate the preformulation process.\\n\\n**Weaknesses:**\\n\\n- **W1:** The experimental validation relies on only four samples, which is insufficient for a comprehensive evaluation of the framework's efficacy. To ensure robustness and reliability, it is recommended to test at least 500 samples.\\n\\n- **W2:** The claim that the agent system suggests meaningful next steps in formulating compounds lacks quantitative evidence. Providing detailed, quantitative results would help substantiate the system's effectiveness in this regard.\\n\\n- **W3:** The paper does not offer a comparison between the proposed agent framework and existing model-based DCS classification methods[1], such as those discussed by Lange et al. (2024b). A comparison would provide valuable context on the performance and advantages of the new framework.\\n\\n- **W4:** The authors should provide a clearer rationale for opting for an LLM-based agent system over a traditional system. Given the simplicity of the workflow illustrated in Figure 1 and the limited number of tools involved, it is unclear why a traditional system would not suffice. In other words, is it necessary to rely on LLM to control the workflow in this DCS classification task? This decision should be justified with experimental evidence to support the choice of an LLM-based agent framework.\\n\\n\\n\\n[1] Justus Johann Lange, Andrea Anelli, Jochem Alsenz, Martin Kuentz, Patrick J O\\u2019Dwyer, WiebkeSaal, Nicole Wyttenbach, and Brendan T Griffin. Comparative analysis of chemical descriptorsby machine learning reveals atomistic insights into solute\\u2013lipid interactions. Molecular Pharma-ceutics, 2024b.\", \"rating\": \"5\", \"confidence\": \"3\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\", \"comment\": \"The authors have submitted a revised version accommodating reviewer comments. Based on the second round of reviews, the decision has been made to accept the paper for poster presentation.\"}", "{\"title\": \"APPA: AGENTIC PREFORMULATION PATHWAY ASSISTANT\", \"review\": \"1. Introduces APPA, an LLM-driven agentic framework for preformulation pathway optimization in drug development.\\n\\n2. Integrates scientific literature, experimental databases, and ML models to assist in drug formulation decisions.\\n\\n3. Leverages Developability Classification System (DCS) to suggest optimal formulation strategies.\\n\\n4. 
Automatically classifies drug candidates and recommends next steps using solubility, permeability, and physicochemical data.\\n\\n5. Uses retrieval-augmented generation (RAG) and Langchain framework to ensure scientifically grounded outputs.\\n\\n6. Outperforms standard GPT-4o in drug classification tasks, providing actionable insights beyond retrieval-based LLMs.\\n\\n7. Supports multi-step query chains, enabling dose optimization, solubility comparisons, and experimental decision guidance.\\n\\n8. Bridges the gap between manual preformulation workflows and AI-driven automation in pharmaceutical R&D.\\n\\n9. Evaluation limited to in silico performance\\u2014no validation with real-world experimental formulations.\\n\\n10. Computational efficiency not addressed\\u2014unclear impact of APPA\\u2019s iterative reasoning on speed and resource usage.\\n\\n11. No comparison against existing AI-driven formulation tools\\u2014unclear how APPA stacks against proprietary pharma solutions.\\n\\n12. Potential LLM hallucination risks\\u2014may generate plausible yet incorrect formulation recommendations.\\n\\n13. Lacks human-in-the-loop validation\\u2014no mechanism to verify APPA\\u2019s reasoning against expert formulation scientists.\\n\\n14. Needs better interpretability in decision-making\\u2014unclear why certain formulations are prioritized over others.\\n\\n15. Promising application but requires further benchmarking, efficiency analysis, and experimental validation.\", \"rating\": \"6\", \"confidence\": \"5\"}", "{\"title\": \"hard to follow the result\", \"review\": \"This paper introduces an agentic preformulation pathway assistant (APPA), leveraging large language models coupled to experimental databases and a suite of machine learning models to streamline the preformulation process of drug candidates. APPA integrates domain expertise from scientific publications, databases holding experimental results, and machine learning predictors to reason and propose optimal preformulation strategies based on the current evidence.\\n\\nThe experimental results presented in the paper are hard to follow and judge for non-domain experts. The author should consider to include some qualitative evaluations.\", \"rating\": \"4\", \"confidence\": \"1\"}" ] }
T2mtCFKIEG
ML-Bench: Evaluating Large Language Models and Agents for Machine Learning Tasks on Repository-Level Code
[ "Xiangru Tang", "Yuliang Liu", "Zefan Cai", "Daniel Shao", "Junjie Lu", "Yichi Zhang", "Zexuan Deng", "Helan Hu", "Kaikai An", "Ruijun Huang", "Shuzheng Si", "Chen Sheng", "Haozhe Zhao", "Liang Chen", "Tianyu Liu", "Yujia Qin", "Wangchunshu Zhou", "Yilun Zhao", "Zhiwei Jiang", "Baobao Chang", "Arman Cohan", "Mark Gerstein" ]
Despite Large Language Models (LLMs) achieving impressive results in code generation, significant challenges remain in automated ML development, particularly in utilizing existing ML repositories effectively. Recently, LLM agents have also been developed that attempt to interact with repository code (e.g., resolving issues), prompting the need for end-to-end evaluations that span environment setup through repository deployment, rather than merely generating code in already-configured environments. These two gaps have motivated our development of ML-Bench, a benchmark rooted in real-world ML applications that leverage existing code repositories. ML-Bench encompasses 9,641 annotated examples across 18 GitHub repositories, challenging LLMs to accommodate user-specified arguments and documentation intricacies effectively. To evaluate both LLMs and agents, two setups are employed: ML-Bench-L for assessing LLMs' text-to-code conversion within a predefined deployment environment, and ML-Bench-A for testing autonomous agents on end-to-end task execution within a Linux sandbox environment. Our findings indicate that while GPT-4o leads with a Pass@5 rate surpassing 50%, there remains significant scope for improvement, highlighted by issues such as hallucinated outputs and difficulties with bash script generation. Notably, on the more demanding ML-Bench-A setup, GPT-4o achieves a 76.47% success rate, reflecting the efficacy of iterative action and feedback in complex task resolution. Our resources, including code, data, and models, are available at \url{https://anonymous.4open.science/r/ML-Bench}.
[ "LLMs", "Code Generation", "Agents" ]
Accept (Oral)
https://openreview.net/pdf?id=T2mtCFKIEG
https://openreview.net/forum?id=T2mtCFKIEG
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "dCCBVlCkSF", "d7EXe3Iw20", "cbepOezGJu" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741143726469, 1741068511102, 1741034044445 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission37/Reviewer_LTit" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission37/Reviewer_p3y8" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Need to justify the model selection for fine-tuning experiments\", \"review\": \"**Paper Summary**\\n\\nThis paper presents ML-Bench, a pioneering benchmarking dataset designed to assess the capabilities of Large Language Models (LLMs) in utilizing existing machine learning code repositories to perform tasks such as Image Captioning in real-world scenarios. The authors propose two distinct benchmarking settings. The first setting involves providing LLMs with a code repository and specific instructions to test their ability to generate correct bash or Python codes for solving a given task. The second setting adds complexity by requiring LLMs to download necessary datasets and install any missing packages, effectively simulating the role of a human coder. The benchmark covers 9641 data points, with evaluations conducted on six closed-source and ten open-source LLMs.\\n\\n**Strengths**\\n\\nML-Bench is a valuable tool for measuring LLMs\\u2019 comprehension and application of popular machine learning code repositories, potentially accelerating research across various scientific domains. The study encompasses extensive experiments using six closed-source LLMs, ten open-source LLMs, and four agent frameworks, emphasizing the significance of choosing appropriate agent frameworks and base LLMs.\\n\\n**Weaknesses**\", \"w1\": \"The authors opted to fine-tune LLama-2-7b, DeekseekCoder-6.7b, and CodeLlama-7b, despite these models delivering the poorest performance in the vanilla prompting setting. Providing a rationale for selecting these low-performance LLMs for fine-tuning experiments would enhance the study.\\n\\n**Comments, Suggestions, and Typos**\\n\\nC1. For the ML-Bench-L setting, it would be helpful to clarify whether LLMs utilize execution feedback, e.g., wrong arguments for bash execution, to improve in subsequent trials.\\n\\nC2. An error analysis of Llama-3.1-405B could provide valuable insights, as it underperforms compared to other models, as shown in Table 1. \\n\\nC3. In line 91, the authors reference \\u201cfour distinct evaluation settings,\\u201d but only discuss two settings, ML-Bench-L and ML-Bench-A, in the main paper. \\n\\nC4. The text in Figure 2 is difficult to read; increasing the font size could improve readability.\\n\\nC5. There is inconsistency in Table 3, showing both \\u201cDeepseekCoder-6.7b\\u201d and \\u201cDeepSeek-Coder-6.7b\\u201d with different experimental results. Clarification is needed.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Good benchmark but low relativeness to scientific topic\", \"review\": [\"This paper proposes ML-Bench for benchmarking code generation, which consists of ML-Bench-L for evaluating LLM's text-to-code conversion within a deployed environment and ML-Bench-A for testing an end-to-end code execution. 
The authors collect 18 ML Github repositories, benchmark GPT-4o, GPT-4, GPT-3.5, and the Claude model family, and evaluate on three settings (Oracle, Code, Retrieval) with pass@1/5 scores.\", \"The results show that GPT-4 achieves the best results on both ML-Bench-L and ML-Bench-A.\", \"## Strengths\", \"The benchmark is comprehensive, containing a large number of repositories and features.\", \"The presentation clearly illustrates their contributions.\", \"The results analysis is insightful, and the conclusion is clear.\", \"The experiments about various settings, including ID and OOD, make the results robust.\", \"The open-source code is well developed.\", \"## Weaknesses\", \"The importance of this work is not significantly presented. How can this benchmark contribute to the community or scientific discovery, is not demonstrated and discussed.\", \"The evaluation metric is too simple, which may introduce some bias to the results.\"], \"rating\": \"7\", \"confidence\": \"3\"}" ] }
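The ML-Bench record above reports Pass@1/Pass@5 rates for generated bash and Python programs. As an aside for readers unfamiliar with the metric, the sketch below shows the standard unbiased pass@k estimator commonly used for this kind of functional-correctness benchmark; it is illustrative only and is not claimed to be ML-Bench's own scoring code.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples drawn
    without replacement from n generated programs (c of which pass the
    tests) is correct."""
    if n - c < k:  # every size-k draw must contain a passing sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: with 5 generations per task and 2 passing, pass@5 is 1.0,
# while pass@1 equals the per-sample success rate 2/5.
print(pass_at_k(5, 2, 5), pass_at_k(5, 2, 1))
```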
QEGMxgbJEV
LLM AGENTS FOR LITERATURE TO CODE CONVERSION:CASE STUDY OF HEAT EXCHANGER DESIGN
[ "Sandeep Mishra", "Vishal Sudam Jadhav", "Shirish Karande", "Venkataramana Runkana" ]
This paper introduces a framework that utilizes large language model (LLM) agents to extract and convert mathematical models from engineering literature into executable code. Autonomous or semi-autonomous conversion of literature into code facilitates downstream tasks such as hypothesis generation, verification, and benchmarking. Focusing on heat exchanger design, our approach efficiently integrates model extraction, code generation, and performance optimization, with minimal human intervention. The system's knowledge base is continuously refined with each new paper, leading to ongoing improvements. Experiments conducted on 115 research articles using the HxAgent approach demonstrate substantial improvements over the previous non-agentic baseline, HxLLM. Although the work is still in progress, the results highlight the potential of agent-driven workflows in advancing scientific discovery.
[ "LLM Agents", "Industrial Design", "Code Generation" ]
Accept (Poster)
https://openreview.net/pdf?id=QEGMxgbJEV
https://openreview.net/forum?id=QEGMxgbJEV
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "kgtis15Jwu", "aqkKkZrIR5", "6FUvhfKDzk" ], "note_type": [ "official_review", "decision", "official_review" ], "note_created": [ 1741016583959, 1741147533955, 1741127508006 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission24/Reviewer_nCDe" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission24/Reviewer_BXYF" ] ], "structured_content_str": [ "{\"title\": \"Official Review\", \"review\": \"The manuscript presents an agentic framework designed for heat exchanger design tasks. Experiments are conducted to validate the effectiveness of the proposed approach through comparison with the HxLLM framework on a designated evaluation dataset.\\n\\nStrengths\\n- The work attempts to address an important application domain through an agent-based approach\\n- Experimental comparison with a domain-specific baseline (HxLLM) is included\\n\\nHowever, this work has several limitations. \\n1. The proposed framework largely adheres to established agentic paradigms without introducing significant innovations specific to heat exchanger design challenges. The work would benefit substantially from identifying and addressing unique challenges in heat exchanger design that require specialized agent architectures, or proposing domain-specific enhancements to the standard agentic framework that improve performance on this particular task.\\n2. The content is not self contained. For example, where is the 115 evaluation data from, how the articles are selected.\\n3. While comparison against HxLLM provides some insights, the experimental evaluation lacks breadth. There is no comparisons with general-purpose models (e.g., GPT variants, Claude) to establish relative performance gains. The ablation studies are absent to validate the contribution of individual components. There is limited analysis of why the proposed approach succeeds or fails in specific scenarios\\n4. There is no analysis of computational efficiency or latency, which is particularly important for multi-agent systems. The discussion on the trade-offs between performance and computational resource requirements is missing.\", \"typos\": [\"Line 93, the figure reference is missing.\"], \"rating\": \"4\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Novel Method with Significant Impact\", \"review\": [\"This paper presents the HxAgent framework, a novel multi-agent system that leverages LLMs to extract mathematical models from engineering literature and automatically convert them into executable code. Focusing on the design and optimization of heat exchangers, the framework utilizes a series of specialized agents (e.g., Summary Creator, TF-IDF, Planner, Designer, Optimization, Code Refiner, and Error Correction agents) to generate, refine, and validate code with minimal human intervention. The system is evaluated against a non-agentic baseline (HxLLM) across six criteria on 115 research articles, demonstrating notable improvements in performance.\", \"The paper exhibits high technical quality and clarity by employing a robust and detailed methodology that includes well-structured pseudo-code, comprehensive mathematical formulations, and clear process diagrams. 
This work shows significance in automating the extraction and translation of complex mathematical models into optimized code, which would benefit the field of research.\", \"### Pros:\", \"Innovative methodology that integrates multiple specialized LLM agents for targeted tasks and introduces self-reflection and RAG-based error correction.\", \"This paper is well-structured and the components are well-explained and demonstrated.\", \"Promises significant automation in literature-to-code conversion.\", \"### Cons\", \"Please check the correctness of quotation marks.\", \"The multi-agent design might be challenging to replicate or implement in different contexts.\", \"Reliance on high-quality input prompts and external datasets could affect performance.\"], \"rating\": \"6\", \"confidence\": \"3\"}" ] }
KNQe3Cmupn
MDCROW: AUTOMATING MOLECULAR DYNAMICS WORKFLOWS WITH LARGE LANGUAGE MODELS
[ "Sam Cox", "Quintina L. Campbell", "Jorge Medina", "Brittany Watterson", "Andrew White" ]
Molecular dynamics (MD) simulations are essential for understanding biomolecular systems but remain challenging to automate. Recent advances in large language models (LLM) have demonstrated success in automating complex scientific tasks using LLM-based agents. In this paper, we introduce MDCrow, an agentic LLM assistant capable of automating MD workflows. MDCrow uses chain-of-thought reasoning over 40 expert-designed tools for handling and processing files, setting up simulations, analyzing the simulation outputs, and retrieving relevant information from literature and databases. We assess MDCrow's performance across 25 tasks of varying complexity, and we evaluate the agent's robustness to both task complexity and prompt style. GPT-4o is able to complete complex tasks with low variance, followed closely by Llama3-405b, a compelling open-source model. While prompt style does not influence the best models' performance, it may improve performance on smaller models.
[ "agent", "agentic AI", "computational biology", "molecular dynamics", "large language models" ]
Accept (Poster)
https://openreview.net/pdf?id=KNQe3Cmupn
https://openreview.net/forum?id=KNQe3Cmupn
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "kiWDCNPdOT", "cIuMeIflRK", "DtmM8q3oxY" ], "note_type": [ "official_review", "decision", "official_review" ], "note_created": [ 1741018570029, 1741148022952, 1741147865083 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission25/Reviewer_ew4q" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission25/Reviewer_prxo" ] ], "structured_content_str": [ "{\"title\": \"Official Review\", \"review\": \"This work presents a agentic framework for automating molecular dynamics workflows.\", \"strength\": [\"This paper is well presented and generally well-written.\", \"The MDCrow framework's effectiveness is thoroughly validated through comprehensive experiments across a diverse spectrum of task complexities.\", \"The evaluation methodology is robust, featuring comparative analyses against multiple state-of-the-art large language models, providing meaningful context for the framework's performance gains.\"], \"rating\": \"7\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Official Review\", \"review\": \"The paper introduces MDCrow, an AI-driven Large Language Model (LLM) agent designed to automate molecular dynamics (MD) workflows. Traditional MD simulations require complex setup, parameter tuning, and extensive manual intervention, making them difficult to streamline. MDCrow leverages chain-of-thought reasoning and integrates with 40 expert-designed tools for protein structure preparation, simulation execution, result analysis, and literature retrieval. The system is evaluated across 25 tasks of varying complexity, demonstrating that GPT-4o and Llama3-405b achieve the highest task completion rates. MDCrow outperforms single-query LLMs and ReAct-style agents, showing superior accuracy and robustness in automating MD workflows.\", \"weaknesses\": \"1. The citations in the paper are not proper, and they do not follow the ICLR template.\\n2. Computationally intensive, requiring high-performance LLMs for optimal performance.\\n3. Limited adaptability to novel simulations, as they rely on pre-defined toolsets and workflows.\", \"rating\": \"4\", \"confidence\": \"4\"}" ] }
JDXB6nvH9x
Performant LLM Agentic Framework for Conversational AI
[ "Alex Casella", "Wayne Wang" ]
The rise of Agentic applications and automation in the Voice AI industry has led to an increased reliance on Large Language Models (LLMs) to navigate graph-based logic workflows composed of nodes and edges. However, existing methods face challenges such as alignment errors in complex workflows and hallucinations caused by excessive context size. To address these limitations, we introduce the Performant Agentic Framework (PAF), a novel system that assists LLMs in selecting appropriate nodes and executing actions in order when traversing complex graphs. PAF combines LLM-based reasoning with a mathematically grounded vector scoring mechanism, achieving both higher accuracy and reduced latency. Our approach dynamically balances strict adherence to predefined paths with flexible node jumps to handle various user inputs efficiently. Experiments demonstrate that PAF significantly outperforms baseline methods, paving the way for scalable, real-time Conversational AI systems in complex business environments.
[ "agentic", "agentic ai", "conversational ai", "machine learning", "workflow navigation", "ai automation", "agent", "performant", "latency" ]
Reject
https://openreview.net/pdf?id=JDXB6nvH9x
https://openreview.net/forum?id=JDXB6nvH9x
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "hwBbzAu06t", "WESp0jw6WV", "CqWXaSS6Pg", "1QlMgbbYWf" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1741144717108, 1741147233060, 1741046997102, 1741031244013 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission2/Reviewer_eaKp" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission2/Reviewer_aynr" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission2/Reviewer_GMGB" ] ], "structured_content_str": [ "{\"title\": \"Review for Performant LLM\", \"review\": \"This paper introduces the Performant Agentic Framework (PAF), a system designed to help Large Language Models (LLMs) navigate complex graph-based workflows in Conversational AI applications. Many existing AI-driven voice assistants struggle with accuracy and efficiency when following structured workflows, leading to errors in decision-making and slow responses. PAF improves this by using vector-based node selection combined with step-by-step logic trees, allowing AI to choose the right actions quickly and accurately while reducing the need for extra planning steps. Experimental results show that PAF outperforms existing methods in both accuracy and speed, making it a promising solution for scalable, real-time AI conversations in industries like customer service, healthcare, and tech support.\", \"strengths_of_the_paper\": \"1. Improves AI decision-making in complex workflows, reducing errors in automated conversations.\\n2. Uses structured logic trees and vector scoring, leading to better accuracy in AI responses.\", \"weaknesses_of_the_paper\": \"1. Limited to graph-based workflows, making it less useful for unstructured conversations.\\n2. Depends on predefined paths, which may reduce AI flexibility in handling unexpected scenarios.\\n3. Requires specialized setup and tuning, making it harder to integrate into new AI systems.\", \"rating\": \"4\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review for Performant LLM Agentic Framework for Conversational AI\", \"review\": \"Summary:\\nThis paper proposes the Performant Agentic Framework (PAF) to improve LLM-driven Conversational AI in graph-based workflows. Existing methods struggle with alignment errors, hallucinations, and high latency, limiting real-world deployment. PAF addresses these issues by combining LLM-based reasoning with a vector-scoring mechanism for efficient node selection. It includes Basic PAF, which follows step-by-step logic trees, and Optimized PAF, which reduces context size using vector-based node selection. Experiments show that PAF significantly improves accuracy and response time, making it a scalable solution for real-time business applications.\", \"pros\": \"1.\\tThe paper addresses specific challenges in navigating graph-based workflows with LLMs (high latency, alignment errors, hallucinations) that are genuinely problematic in production environments.\\n2.\\tThe PAF framework combines LLM reasoning with vector scoring mechanisms, reducing latency while maintaining accuracy. 
This hybrid approach effectively addresses the efficiency issues of relying solely on LLM planning.\", \"cons\": \"1.\\tWhile the paper mentions frameworks like LangChain and LangGraph, it doesn't directly compare PAF against these established solutions, opting instead for custom baseline methods.\\n2.\\tThe experiments are primarily based on synthetic datasets, lacking evaluation in real production environments, which limits the generalizability of the research conclusions.\\n3.\\tThe paper lacks detailed analysis of computational resource requirements (memory, CPU/GPU demands) and doesn't explore how the framework's performance scales with workflows of different sizes.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"This paper introduces the Performant Agentic Framework (PAF), a novel approach designed to enhance the performance of Large Language Models in managing complex, node-based conversational workflows.\", \"review\": \"### **summary of strength**:\\n1) **Motivation**: The motivation behind PAF is clear and pragmatically grounded in real-world application needs, particularly within environments requiring dynamic conversational AI capabilities\\n\\n2) **New framework**: The paper introduced new framework for node selection. Result show improvement in accuracy of the task.\\n### **summary of weakness**:\\n1) **Lack of comparison to other baselines**: While the paper acknowledges several existing frameworks such as LangChain and MetaGPT in the related work section, it notably lacks direct comparisons with these approaches, limiting the evaluation to a simplistic \\\"naive single-shot\\\" baseline. This restricts the ability to assess PAF's true advancements over potentially more sophisticated and similarly aimed approaches currently in use.\\n\\n2) **Lack of description for experimental setting**: The experimental section of the paper fails to detail the specific settings used, including the backbone models, datasets, and configuration parameters.\\n\\n3) **Lack of analysis on vector-based scoring**: The vector-based scoring mechanism is a core component of Optimized PAF, yet there is no ablation study isolating its contribution. It is unclear how much this mechanism enhances overall system performance.\\n\\n4) **Contribution to the literature**: The paper claims that the framework can effectively address shortcomings in the literature, such as \\\"context drift\\\" and \\\"alignment errors,\\\" yet it does not provide empirical evidence or specific experiments to substantiate these claims.\\n\\n5) **typos in table 1**: The header of table 1, column2 is not clear.\\n\\n### **Questions**\\n\\n1) Can you provide an experimental comparison against state-of-the-art agentic frameworks such as LangChain or MetaGPT? This would give a clearer picture of PAF\\u2019s advantages over existing solutions. How does PAF perform relative to these systems in terms of accuracy, efficiency, and scalability?\\n\\n2) The contribution of the vector-based scoring mechanism to the overall performance improvement remains unclear. Could you conduct an ablation study where this component is removed or replaced with alternative selection techniques? \\n\\n3) Given the role of the threshold in the vector-based node selection process, how was this threshold determined? Was it chosen heuristically, or was it tuned through systematic experimentation?\", \"rating\": \"3\", \"confidence\": \"4\"}" ] }
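The PAF record above pairs LLM reasoning with a "mathematically grounded vector scoring mechanism" for choosing the next workflow node, but this entry does not give the exact scoring function. The snippet below is therefore only a plausible sketch: cosine-similarity scoring of candidate nodes against the user's utterance embedding, with a threshold deciding between a flexible jump and staying on the predefined path. The node schema and threshold value are hypothetical.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_node(utterance_emb, current_node, candidate_nodes, threshold=0.75):
    """candidate_nodes: list of dicts with an 'embedding' key (assumed schema).
    Jump to the best-matching node only if it clears the threshold; otherwise
    keep following the predefined path from the current node."""
    best_node, best_score = current_node, -1.0
    for node in candidate_nodes:
        score = cosine(utterance_emb, node["embedding"])
        if score > best_score:
            best_node, best_score = node, score
    return best_node if best_score >= threshold else current_node
```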
H2x9juCuJg
Evolving RL: Discovering New Activation Functions using LLMs
[ "Kalyan Varma Nadimpalli", "Shashank Reddy Chirra", "Pradeep Varakantham", "Stefan Bauer" ]
Deep Reinforcement Learning (DRL) has traditionally inherited activation functions from supervised learning, despite fundamental differences in learning dynamics and objectives. We present EvolveAct, a novel framework that leverages large language models and evolutionary search to automatically discover optimal activation functions for specific RL tasks. Our method combines genetic programming with code Large Language Models (LLMs) to explore a rich space of mathematical functions, optimizing for stability and performance in DRL training. Experimental results across multiple environments show that the discovered activation functions consistently outperform standard choices such as ReLU and TanH, improving final performance on the Minatar suite by 37.25% and 28.3% on the Brax suite on average. By jointly optimizing over multiple diverse environments, we discover activation functions that demonstrate strong generalization capabilities across different RL domains. This research provides a foundation for automating fundamental architectural choices in deep reinforcement learning systems.
[ "Reinforcement Learning", "Evolutionary Search", "Large Language Models", "LLM Hypothesis Generation" ]
Accept (Poster)
https://openreview.net/pdf?id=H2x9juCuJg
https://openreview.net/forum?id=H2x9juCuJg
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "vjedbQzXyw", "poX00bf2iL", "ErWo3UVRMO" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741148146153, 1741044312298, 1741146166016 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission29/Reviewer_sJQz" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission29/Reviewer_CSv6" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of Evolving RL: Discovering New Activation Functions using LLMs\", \"review\": \"Pros:\\n1. This paper is well-written. The motivation is clear, and the paper is very easy to follow.\\n2. The experiments are very promising. \\n3. The activation functions this paper finds are relatively simple, not too complicated. They seem to be practical in real implementation.\", \"cons\": \"1. The experiments are only conducted on PPO with fixed hyperparameters but have not been applied to other RL algorithms. It would be great to conduct more experiments in broader settings.\\n2. The evolutionary process likely requires significant computational resources to evaluate multiple activation functions across environments and random seeds, though specific requirements aren't fully detailed.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Good Paper\", \"review\": \"This paper introduces EvolveAct, a framework that automatically discovers new activation functions for Deep Reinforcement Learning (DRL) using large language models (LLMs) and evolutionary search. Traditional DRL models use standard activation functions like ReLU and Tanh, which were originally designed for supervised learning, despite fundamental differences in learning dynamics. EvolveAct initializes with basic activation functions and evolves new ones through a combination of genetic programming and LLM-guided crossover operations. The paper suggests that custom activation functions tailored to specific RL tasks can significantly enhance performance, opening the door for automated architectural optimizations in deep learning.\", \"strengths_of_the_paper\": \"1. Automates the discovery of activation functions, reducing the reliance on manual design.\\n2. Leverages LLMs for intelligent function generation, improving diversity in search.\\n3. Demonstrates significant performance gains over traditional activation functions in RL.\", \"weaknesses_of_the_paper\": \"1. Dependent on LLM-generated functions, which may introduce bias or suboptimal solutions.\\n2. Computationally expensive, multiple RL training runs are required to evaluate each activation function.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
EVqlVjvlt8
Large Language Models powered Neural Solvers for Generalized Vehicle Routing Problems
[ "Cong Dao Tran", "Quan Nguyen-Tri", "Huynh Thi Thanh Binh", "Hoang Thanh-Tung" ]
Neural Combinatorial Optimization (NCO) has shown promise in solving combinatorial optimization problems end-to-end with minimal expert-driven algorithm design. However, existing constructive NCO methods for Vehicle Routing Problems (VRPs) often rely on attention-based node selection mechanisms that struggle with large-scale instances. To address this, we propose a directed fine-tuning approach for NCO based on LLM-driven automatic heuristic design. We first introduce an evolution-driven process that extracts implicit structural features from input instances, forming LLM-guided attention bias. This bias is then integrated into the neural model’s attention scores, enhancing solution flexibility and scalability. Instead of retraining from scratch, we fine-tune the model on a small, diverse dataset to transfer learned heuristics effectively to larger problem instances. Experimental results show that our approach achieves state-of-the-art performance on TSP and CVRP, significantly improving generalization to both synthetic and real-world datasets (TSPLIB and CVRPLIB) with thousands of nodes.
[ "Large Language Models", "Neural Combinatorial Optimization", "Vehicle Routing Problems" ]
Accept (Oral)
https://openreview.net/pdf?id=EVqlVjvlt8
https://openreview.net/forum?id=EVqlVjvlt8
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "rtBTf7pvPs", "lPLrQLfztV", "dHycCknsfu", "Wdyvuiohj0" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741148314466, 1741146686688, 1741026807371, 1741067749221 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission32/Reviewer_NyHE" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission32/Reviewer_K7YX" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission32/Reviewer_zz3F" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Good Paper\", \"review\": \"This paper presents a Large Language Model (LLM)-powered neural solver for generalized Vehicle Routing Problems (VRPs), integrating LLM-generated heuristics with Neural Combinatorial Optimization (NCO). Traditional constructive NCO methods struggle with large-scale VRPs due to their reliance on attention-based node selection, which does not scale well. The proposed method fine-tunes existing neural solvers by introducing LLM-guided attention bias, derived through an evolutionary process that extracts structural features from VRP instances. This attention bias enhances the model\\u2019s flexibility and generalization without modifying its architecture. Experimental results on Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP) show that the approach outperforms state-of-the-art solvers (e.g., POMO, LEHD, ELG) in both synthetic and real-world datasets (TSPLIB, CVRPLIB), achieving superior scalability and efficiency for large-scale combinatorial optimization.\", \"suggested_improvements\": \"1. The paper does not provide an in-depth analysis of how LLM-generated heuristics influence decision-making.\\n2. The study mainly compares against other NCO-based solvers, but real-world logistics often rely on classical heuristics (e.g., LKH, HGS, OR-Tools).\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Review\", \"review\": \"This paper introduces a novel LLM-guided attention bias mechanism for model fine-tuning, aiming to enhance the generalization capabilities of neural combinatorial optimization (NCO) models.\\n\\nExtensive experimental results demonstrate that the proposed approach achieves state-of-the-art performance on the Traveling Salesman Problem (TSP) and the Capacitated Vehicle Routing Problem (CVRP) across various problem scales. Additionally, the method exhibits strong generalization capabilities, effectively solving real-world TSPLib and CVRPLib instances.\", \"areas_for_improvement\": [\"It would be better if the paper could analyze how the number of iterations (G) and the population size (N) impact the quality of the attention bias.\", \"It would be better if the paper could provide qualitative examples of the generated heuristic and analyze why these LLM generated heuristics provide better attention scores compared to other expert-designed heuristics.\", \"The paper should use parenthetical citations when references are not used as nouns.\"], \"rating\": \"9\", \"confidence\": \"4\"}", "{\"title\": \"An effective fine-tuning method for vehicle routing problems\", \"review\": \"This paper addresses the challenge of scaling Neural Combinatorial Optimization (NCO) for large Vehicle Routing Problem (VRP). It proposes an LLM-driven fine-tuning approach that integrates automatically generated attention bias into pre-trained neural solvers. 
By leveraging LLMs to design heuristics and refining models on diverse instance sizes, the method enhances generalization without retraining from scratch. Experiments on TSP and CVRP show state-of-the-art performance of the proposed method.\", \"strengths\": \"1. This paper proposes a \\\"plug-in\\\" method which can be easily combined with existing machine learning models for VRP. Therefore, the method may provide further improved performance suppose there are better foundation models for VRP, hence the impact of this paper is high.\\n\\n2. The proposed method is a fine-tuning method in nature. It requires less time to implement than training a model from scratch when the data changes. Besides, adding bias introduces negligible computational cost to the initial foundation model to which the method is applied. Therefore, the proposed method has a high time efficiency.\\n\\n3. This paper conducts well-designed experiments to verify the effectiveness of its proposed method. Results show that the proposed method achieves superior performance.\", \"weaknesses\": \"1. This paper relies on EoH[1] method to generate the attention bias. The difference from naive application of EoH is that this paper uses LLMs to initialize the heuristics. It may be better if analysis is provided on the benefit of such initialization vs for example random initialization or other possible initializations. \\n\\n[1] Liu, Fei, et al. \\\"Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model.\\\" International Conference on Machine Learning. PMLR, 2024.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
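The vehicle-routing record above describes injecting an LLM-designed attention bias into a pre-trained neural solver's attention scores. The snippet shows the generic pattern that description implies: add a heuristic bias matrix to the raw next-node logits before the softmax. The example bias (a negative scaled distance) and the weighting factor are placeholders, not the heuristics the paper actually evolves.

```python
import numpy as np

def biased_node_probs(raw_scores: np.ndarray, bias: np.ndarray, weight: float = 1.0) -> np.ndarray:
    """raw_scores, bias: (batch, n_nodes) arrays of attention logits and
    heuristic bias for the next-node decision. Returns selection probabilities."""
    logits = raw_scores + weight * bias
    logits = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    exp = np.exp(logits)
    return exp / exp.sum(axis=-1, keepdims=True)

# One possible hand-rolled bias: prefer nearby nodes by penalising distance,
# e.g. bias = -dist_from_current / dist_from_current.max().
```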
DFCT9dHkko
Neural Nonmyopic Bayesian Optimization in Dynamic Cost Settings
[ "Sang T. Truong", "Duc Quang Nguyen", "Willie Neiswanger", "Ryan-Rhys Griffiths", "Stefano Ermon", "Nick Haber", "Sanmi Koyejo" ]
Bayesian optimization (BO) is a popular framework for optimizing black-box functions, leveraging probabilistic models such as Gaussian processes. Conventional BO algorithms, however, assume static query costs, which limit their applicability to real-world problems with dynamic cost structures such as geological surveys or biological sequence design, where query costs vary based on the previous actions. We propose a novel nonmyopic BO algorithm named LookaHES featuring dynamic cost models to address this. LookaHES employs a neural network policy for variational optimization over multi-step lookahead horizons to enable planning under dynamic cost environments. Empirically, we benchmark LookaHES on synthetic functions exhibiting varied dynamic cost structures. We subsequently apply LookaHES to a real-world application in protein sequence design using a large language model policy, demonstrating its scalability and effectiveness in handling multi-step planning in a large and complex query space. LookaHES consistently outperforms its myopic counterparts in synthetic and real-world settings, significantly improving efficiency and solution quality. Our implementation is available at https://github.com/sangttruong/nonmyopia.
[ "nonmyopic Bayesian optimization", "dynamic cost settings", "neural network policies", "gaussian process", "large language models" ]
Accept (Oral)
https://openreview.net/pdf?id=DFCT9dHkko
https://openreview.net/forum?id=DFCT9dHkko
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "sjojRAIGnE", "dZSkp7SFc0", "YsS1AKhJ0U", "JDIzsu4GHT" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1741034450308, 1741041296147, 1741148281064, 1741146890213 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission33/Reviewer_tiGq" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission33/Reviewer_uG3H" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission33/Reviewer_oKCy" ] ], "structured_content_str": [ "{\"title\": \"Meaningful problem setting with solution of limited innovation\", \"review\": \"This paper proposes a nonmyopic Bayesian optimization method for black-box optimization with action-dependent evaluation costs. It optimizes recurrent neural networks for sampling and uses pathwise sampling to reduce complexity. Experiments on synthetic functions and protein design validate its effectiveness and practicality.\", \"strengths\": \"1. This paper proposes a model to address the problem of black box optimization problem with dynamic cost. Some practical examples of this problem are provided to support the significance and impact of the problem setting.\\n\\n2. This paper conducts experiments on both synthetic continuous data and real-world discrete data (protein optimization), to show the effectiveness of the proposed model.\", \"weaknesses\": \"1. The contribution of this paper in the aspect of machine learning innovation is limited. This paper addresses the key point of its research question, i.e., the dynamic cost, by considering the history moves as a sequence and using RNN to capture the trend. Besides this, so far as I can see, this paper does not provide innovative techniques that consider the mapping between the moves and costs.\\n\\n2. This paper does not provide convincing enough empirical analysis. The baselines chosen are mostly before 2010, while there are other available state-of-the-art models that are related with this paper, such as the following one (also cited by this paper):\\n\\nLee, Eric Hans, et al. \\\"A nonmyopic approach to cost-constrained Bayesian optimization.\\\" Uncertainty in Artificial Intelligence. PMLR, 2021.\\n\\nThe above work is highly related to this paper but is not compared through experiments.\", \"rating\": \"5\", \"confidence\": \"4\"}", "{\"title\": \"review\", \"review\": \"This paper introduces LookaHES, a novel nonmyopic Bayesian optimization algorithm designed for dynamic cost settings where query costs depend on previous actions. LookaHES integrates a neural network policy to achieve scalability in planning multiple steps ahead. The authors benchmark LookaHES against several baseline methods on nine synthetic functions with various dimensions and noise levels, NASA satellite images, and on a real-world protein sequence design problem. The results demonstrate superior performance over myopic baselines and scalability to 20-step lookahead horizons.\", \"strengths\": \"1.\\tThe paper addresses an important problem of nonmyopic Bayesian optimization with dynamic costs, addressing a gap in existing literature that primarily focuses on static cost structures.\\n\\n2.\\tAs a solution to the research question, the authors propose LookaHES. 
By using a neural network policy for variational optimization, LookaHES can handle lookahead horizons of up to 20 steps, significantly beyond state-of-the-art nonmyopic approaches.\\n\\n3.\\tEmpirical evaluations across noise levels, cost structures, and dimensions demonstrate the efficiency and solution quality of proposed LookaHES method. The paper also contains results for the protein sequence design, showing how the method can be applied to real-world problems.\", \"concerns\": \"1. Although this paper compares against several baseline methods, all except MSL are myopic methods, if I understand correctly. Conducting additional comparisons with state-of-the-art nonmyopic methods would strengthen the evaluation.\\n\\n2. LookaHES focuses on a known dynamic cost structure, which may not always be feasible. The paper does not address scenarios where costs are uncertain with high variability.\\n\\n3. The performance of LookaHES relies on a well-specified surrogate model, i.e., the surrogate model should approximate the target function effectively. As shown in Figure 12, using Bayesian linear regression resulted in a wrong approximation of the ground truth functions. I am curious: how does model misspecification influence planning quality, and would it be a limitation for applying the method to complex real-world decision-making scenarios?\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Good Paper\", \"review\": \"The paper introduces LookaHES, a neural nonmyopic Bayesian optimization algorithm designed to optimize black-box functions in dynamic cost environments. Traditional Bayesian optimization (BO) methods assume static query costs, which limits their applicability to real-world problems where costs change based on prior queries, such as geological surveys and biological sequence design. LookaHES addresses this by incorporating a neural network policy that enables multi-step lookahead while considering dynamic costs, making strategic investments in queries that may initially seem suboptimal but unlock better solutions over time. The method is evaluated on synthetic functions, NASA satellite imagery, and protein sequence design, demonstrating superior efficiency and solution quality compared to traditional myopic approaches. LookaHES outperforms existing BO algorithms by efficiently handling large, complex search spaces and optimizing queries over longer planning horizons.\", \"suggested_improvements\": \"1. The authors should explain how the neural network learns to balance cost and reward in different environments.\\n2. Test LookaHES on real-world engineering or logistics problems, such as robotic path planning or drug discovery.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
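LookaHES, as summarised above, plans over multi-step lookahead horizons while paying action-dependent query costs. The function below is a heavily simplified Monte-Carlo sketch of that idea: roll a policy forward on one posterior sample of the objective and subtract a dynamic cost that depends on the previous query. The real method instead optimises a neural policy variationally, so every callable here is an assumed stub.

```python
def lookahead_value(policy, objective_sample, cost_fn, x_prev, x_first, horizon=5):
    """Value of committing to `x_first` as the next query, under one posterior
    sample of the objective. `policy(x) -> next x`, `objective_sample(x) -> float`,
    and `cost_fn(x_from, x_to) -> float` are stubs; averaging this quantity over
    many posterior samples gives a cost-aware, nonmyopic acquisition estimate."""
    x, prev, value = x_first, x_prev, 0.0
    for _ in range(horizon):
        value += objective_sample(x) - cost_fn(prev, x)
        prev, x = x, policy(x)
    return value
```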
90JhYTlGSI
Sara: Screening Agents for Rheumatoid Arthritis
[ "Umakanta Maharana", "Sarthak Verma", "Avarna Agarwal", "Prakashini Mruthyunjaya", "Sakir Ahmed", "Murari Mandal" ]
Early diagnosis of Rheumatoid Arthritis (RA) remains a critical challenge in healthcare due to its nonspecific early symptoms and reliance on prolonged clinical evaluations, which can delay treatment and worsen patient outcomes. Although Large Language Models (LLMs) show promise in medical applications, their adaptation for specialized diagnostic tasks requires tailored knowledge integration and interpretability---a gap in current AI-driven solutions. In this work, we propose an LLM-based agentic framework SARA, for early screening and diagnosis of RA across diverse clinical stages. We introduce PreRAID (Prescreening Rheumatoid Arthritis Information Database), a real-world dataset comprising data from 160 patients. SARA employs a multi-stage reasoning approach that combines pattern recognition with clinical heuristics to analyze patient symptoms, medical history, and laboratory findings. The PreRAID dataset serves as a contextual knowledge base. The system not only identifies potential RA cases but also generates human-readable explanations for its conclusions, aligning with clinical demands for transparency and accountability in AI-assisted diagnosis. Through rigorous validation on both synthetic and retrospective patient datasets, our framework achieved diagnostic accuracies of up to 95\% and generated explanations deemed actionable in 92\% of cases by both rheumatologists and medical interns. Furthermore, several cross-validation results demonstrate robust performance across diverse patient demographics and clinical presentations, suggesting its potential for widespread implementation. This work demonstrates the viability of LLM agents as scalable, explainable tools for complex diagnostic tasks, especially in resource-constrained healthcare settings where specialist access may be limited.
[ "LLM Agents", "Rheumatoid Arthritis (RA)" ]
Reject
https://openreview.net/pdf?id=90JhYTlGSI
https://openreview.net/forum?id=90JhYTlGSI
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "pRAH3Gzbde", "jaTa1qbIWD", "W2JJvk4OrW", "6Qc6b0SzVJ" ], "note_type": [ "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1741038623882, 1741020371432, 1741109578468, 1741142942651 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission7/Reviewer_DmNK" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission7/Reviewer_eUKS" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission7/Reviewer_Artm" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Sara: Screening Agents for Rheumatoid Arthritis\", \"review\": \"# 1. Summary\\n\\nIn this paper, early Rheumatoid Arthritis (RA) diagnosis is addressed by proposing SARA, which employs large language models (LLMs) divided into three agent roles, Solo, Duo, and Trio, to simulate the clinical reasoning process. The system diagnoses RA and provides human\\u2010readable explanations using a new dataset, PreRAID, developed from 160 patients. \\n \\n 1. **Novel Approach:** The multi-agent design shows a significant improvement over existing attempts to deal with the clinical decision-making issues. The framework decomposes the diagnostic process into specialized roles, such that the framework acts more like a transparent, easy-to-interpret collaboration among experts than a single, opaque system.\\n\\n\\n\\n2. **Data Contribution:** \\nit is well-annotated and can be used for further research.\\n\\n 3. **Robust Evaluation:**\\n In this paper, experiments are performed using multiple metrics. This robust evaluation supports the approach's validity and the importance of thoughtful, prompt engineering to boost performance.\\n\\n4. **Explainability:** \\n A significant advantage is the system\\u2019s ability to generate explanations that align with clinical reasoning.\\n\\n# 3. Weaknesses\\n\\n 1. **Limited Dataset Size** \\n The study may not have fully captured the diversity of populations because of the limited 160 records in the dataset, which raise a concern about its generlization. \\n\\n2. **Scalability and Overhead** \\n Despite its high performance, the Duo configuration introduces additional computational complexity, which may be challenging for real-world clinical implementation.\", \"rating\": \"8\", \"confidence\": \"4\"}", "{\"title\": \"Review\", \"review\": \"The paper proposes an LLM-based agentic framework, SARA, designed for the early screening and diagnosis of Rheumatoid Arthritis (RA) across various clinical stages. Additionally, it introduces a new dataset resource, PreRAID (Prescreening Rheumatoid Arthritis Information Database), which comprises real-world data from 160 patients. The study includes extensive cross-validation experiments to evaluate the framework\\u2019s effectiveness.\", \"weakness\": [\"The gender and RA diagnosis distribution is not consistent between Table 2 and L207-208\", \"The paper lacks details on how patient data is transformed into vector embeddings. What pretrained model was used for encoding? What is the dimensionality of the vector embeddings? If it is one, how does the method aggregate the embeddings over all tokens?\", \"Table 3 is meant for comparing different agentic configurations. However, the choice of models appears inconsistent. 
Specifically, the Duo agent uses GPT-4o min, while other configurations use GPT-4o.\", \"The paper does not adequately discuss why the Trio agent underperforms compared to the Duo agent.\", \"As shown in Figure 5, a single LLM without a knowledge base can sometimes outperform both the Solo and Trio agents. The authors should incorporate a detailed discussion and analysis to explain these findings.\", \"The paper does not provide any results or analysis regarding the quality and effectiveness of the generated human-readable explanations. Since explainability is also one of the contribution of the paper, this aspect should be discussed.\", \"The paper should use parenthetical citations when references are not used as nouns.\"], \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"Sara: Screening Agents for Rheumatoid Arthritis\", \"review\": \"The paper introduces SARA, an LLM-based multi-agent framework for early Rheumatoid Arthritis (RA) diagnosis, achieving high accuracy (95%) and producing clinician-approved explanations (92%) using the PreRAID dataset (160 patients). While the paper is well-written, there are several key concerns. First, the dataset is small and geographically constrained, limiting generalizability. Second, the Duo/Trio agent configurations introduce significant computational overhead, raising concerns about scalability. Additionally, the fixed knowledge base restricts adaptability to new medical insights, and the methodology for obtaining contextual embeddings lacks clarity\\u2014specifically, which pre-trained embedding model was used (Lines 246/247)? From a framework perspective, the approach lacks novelty, as it primarily builds on existing multi-agent paradigms without introducing fundamental improvements. Additionally, in Table 3, why was the GPT-4o Mini result reported for the Duo agent configuration, whereas the GPT-4o result has been reported for the other settings? Furthermore, Section 5.1 lacks in-depth analysis, primarily summarizing results from tables and plots rather than offering insights into why specific agent configurations outperform others or how ablations impact performance. A more thorough discussion of the reasoning behind model superiority, failure cases, and comparative analysis is needed to enhance the scientific depth and clarity of the experimental results.\", \"rating\": \"5\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}" ] }
5XQlbNIhAW
ProteinHypothesis: A Physics-Aware Chain of Multi-Agent RAG LLM for Hypothesis Generation in Protein Science
[ "Adib Bazgir", "Rama chandra Praneeth Madugula", "Yuwen Zhang" ]
Scientific hypothesis generation is fundamental to advancing molecular biology and protein science. This study presents a novel AI-driven multi-agent framework that integrates Retrieval-Augmented Generation (RAG) with structured experimental data for automated hypothesis generation and validation. The methodology employs scientific literature retrieval, structured dataset analysis, and multi-agent evaluation, ensuring that generated hypotheses are scientifically rigorous and experimentally testable. The framework consists of three key phases: (1) Hypothesis Generation, where insights from literature and structured data are synthesized using large language models; (2) Multi-Agent Evaluation through Chain of Thoughts (CoT) mechanism, where hypotheses are assessed for internal consistency, feasibility analysis, novelty assessment, scientific impact, and scalability/generalizability; and (3) Final Selection and Validation, where high-scoring hypotheses undergo refinement using protein-specialized agents and are linked to experimental validation strategies such as molecular dynamics simulations, site-directed mutagenesis, and structural characterization. Results demonstrate the system’s ability to generate novel, high-impact hypotheses in protein stability, enzyme catalysis, ligand interactions, and biomolecular interactions, with broad applications in drug discovery, synthetic biology, and protein engineering. The study highlights the potential of AI-driven hypothesis generation in accelerating scientific discovery by integrating machine learning, structured data analysis, and multi-agent validation into research workflows. Our code is available at https://github.com/adibgpt/ProteinHypothesis.
[ "Hypothesis Generation", "Multi-Agent LLM", "Retrieval-augmented Generation (RAG)", "Protein Science" ]
Accept (Poster)
https://openreview.net/pdf?id=5XQlbNIhAW
https://openreview.net/forum?id=5XQlbNIhAW
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "nsqBDiMGpv", "k7iFhuC9BJ", "RnUIfJi9qF", "0I08ArnTDS" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1741041292859, 1740956922078, 1741143500294, 1740878543429 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission19/Reviewer_vVJm" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission19/Reviewer_LDTB" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission19/Reviewer_obSo" ] ], "structured_content_str": [ "{\"title\": \"ProteinHypothesis: A Physics-Aware Chain of Multi-Agent RAG LLM for Hypothesis Generation in Protein Science\", \"review\": \"Paper Summary:\\n\\nThe paper hypothesizes that integrating RAG-based literature analysis in a multi-agent evaluation framework can effectively generate scientifically rigorous and experimentally testable hypotheses in protein science. The authors propose that a three-phase approach using specialized AI agents can refine hypotheses through systematic validation, ensuring both scientific merit and practical applicability.\", \"strengths\": \"- Domain Specialization: This domain specialization likely improves the relevance and applicability of the generated hypotheses compared \\n to more general-purpose hypothesis generation systems.\\n\\n- Multi-Phase Evaluation Process: The three-phase approach with progressive refinement through different specialized agents provides a \\n robust framework for ensuring hypotheses\", \"weaknesses\": [\"Unclear Technical Implementation Details: The paper omits crucial information about the LLMs used, their parameters, and fine-tuning approaches, making reproduction difficult and obscuring the computational basis of the system's performance.\", \"Limited Discussion of Hallucination Risks: While the paper uses RAG to retrieve scientific content, it lacks specific verification mechanisms against hallucinations. For example, how does the RAG system handle the structured information? Despite multi-agent evaluation, the system needs better fact-checking and confidence scoring when extrapolating beyond retrieved information.\", \"Handling Novel Scientific Domains: How does your system perform in emerging protein science subfields with sparse literature or when studying protein families with limited data? What mechanisms address knowledge gaps in these scenarios?\"], \"questions_to_authors\": \"See the above\", \"rating\": \"5\", \"confidence\": \"3\"}", "{\"title\": \"Review of ProteinHypothesis\", \"review\": \"Summary:\\nThis study presents an AI-driven multi-agent framework for automated hypothesis generation in molecular biology. Using RAG and structured data, it synthesizes insights, evaluates feasibility via CoT, and validates hypotheses. Results show its ability to generate novel hypotheses in protein stability, enzyme catalysis, and biomolecular interactions, demonstrating AI\\u2019s potential in accelerating discovery for drug development, synthetic biology, and protein engineering.\", \"strengths\": \"1) This paper introduces a novel multi-agent framework integrating RAG with structured experimental data.\\n2) The three-phase framework has a broad applicability. 
It can generate novel hypotheses across various domains, such as protein stability, enzyme catalysis, and biomolecular interactions.\\n3) By automating hypothesis generation and validation, the framework has the potential to speed up research processes in biology.\", \"weaknesses\": \"1) With RAG, multi-agent LLMs, and additional CoT reasoning, this framework is computationally expensive. This may pose challenges for widespread adoption.\\n2) The authors did not compare AI-generated hypotheses with those from domain experts, making it unclear how AI performance compares to human-driven scientific discovery.\\n3) Readability: Consider breaking up some long paragraphs, especially in the introduction section, as they make it difficult for the reader to engage with the content from the start.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of ProteinHypothesis\", \"review\": \"# Summary:\\nThis work introduces an AI-driven multi-agent framework for automated scientific hypothesis generation and validation in molecular biology and protein science. The framework integrates Retrieval-Augmented Generation (RAG) with structured experimental data and follows three key phases: (1) Hypothesis Generation, leveraging large language models to synthesize insights from literature and structured data; (2) Multi-Agent Evaluation, using a Chain of Thought (CoT) mechanism to assess hypotheses for consistency, feasibility, novelty, impact, and scalability; and (3) Final Selection and Validation, refining high-scoring hypotheses with protein-specialized agents and linking them to experimental validation strategies like molecular dynamics simulations and site-directed mutagenesis. The results demonstrate the system's ability to generate high-impact hypotheses in protein stability, enzyme catalysis, and biomolecular interactions, with applications in drug discovery, synthetic biology, and protein engineering. This study highlights AI\\u2019s potential in accelerating scientific discovery by integrating machine learning, structured data analysis, and multi-agent validation into research workflows.\\n\\n# Strengths:\\n1. The paper presents a novel multi-agent framework integrating Retrieval-Augmented Generation (RAG) with structured experimental data, enhancing automated hypothesis generation and validation in molecular biology and protein science.\\n2. The paper is easy to follow.\\n\\n# Weaknesses:\\n1. In Conclusion, while the authors claim that \\\"Results demonstrate the system's capability to generate novel, high-impact hypothese ...\\\", the paper only provides a few examples without presenting a comprehensive evaluation of the system on the entire dataset. Without an overall evaluation metric, it is difficult to assess the approach's reliability and effectiveness.\\n2. This work employs many agents, but the authors do not specify their sources, thus making it unclear how these agents are obtained, configured, or validated, which raises concerns about reproducibility and reliability.\\n\\n# Typo:\\n\\nSection 2.2.1: belwo &rarr; below\", \"rating\": \"5\", \"confidence\": \"5\"}" ] }
5XNYu4rBe4
Dynamic Knowledge Integration in Multi-Agent Systems for Content Inference
[ "Atsushi Yamamoto", "Takumi Iida", "Taito Naruki", "Akihiko Katagiri", "Yudai Koike", "Ryuta Shimogauchi", "Kota Shimomura", "Eri Onami", "Koki Inoue", "Osamu Ito" ]
Advancements in cutting-edge science and technology have resulted from the integration of multiple interdisciplinary domains beyond traditional academic boundaries. Achieving effective cross-domain knowledge-sharing and consensus-building is crucial. However, single-agent Large Language Models (LLMs) solutions often struggle to integrate the diverse and highly specialized knowledge required in these contexts. This study proposes a multi-agent system with dynamic knowledge integration, where multiple specialized LLM-based agents cooperatively infer content by referencing different domain-specific databases. Each agent selectively and dynamically updates references based on conversational context to achieve deeper insight and more robust solutions. We propose four system architectures---Decentralized, Centralized, Layered, and Shared Pool---for agent coordination. We then evaluate these approaches on a title-to-abstract inference task using a subset of the arXiv dataset, demonstrating that multi-agent systems significantly outperform single-agent models in both accuracy and stability. Notably, expert agents, restricted to domain-specific data, produce more precise and consistent outputs, and the Decentralized architecture fosters increased domain interaction. These findings suggest that the collaboration of specialized multi-agent systems can more effectively facilitate the consensus-building process in the advancement of complex interdisciplinary scientific domains.
[ "Multi agent", "LLM", "Knowledge integration", "Knowledge representation and reasoning" ]
Accept (Poster)
https://openreview.net/pdf?id=5XNYu4rBe4
https://openreview.net/forum?id=5XNYu4rBe4
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "keFwFtZ46k", "TWmRzf59Bq", "R44kLTPIXg", "FO0WalJmc1" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740760480344, 1740703650027, 1741143285998, 1741042857440 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission13/Reviewer_vChF" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission13/Reviewer_yg2E" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission13/Reviewer_1sQM" ] ], "structured_content_str": [ "{\"title\": \"Good work\", \"review\": \"This paper works on integrating diverse and highly specialized knowledge through multi-agent systems. Four system architectures are presented for agent coordination. Detailed experiments and comprehensive analysis are conducted to compare between these four structures. In general, the paper is well written and the methodologies are innovative.\", \"quality\": \"The paper is in good quality with a clear presentation to show the background and challenges in multi-agent coordination across different knowledge domains. The methodologies behind this paper are comprehensive followed by complete experiments and insightful analysis. The paper is in a good flow and the writing is formal and clear.\", \"clarity\": \"The paper clearly shows the proposed system structures that are important to agent coordination. The comprehensive and rigorous experiments show the effectiveness of the proposed coordination structures. Moreover, the analysis is also thorough and insightful.\", \"originality\": \"The methodologies proposed in this paper are novel and important for applying multi-agents in AI systems.\", \"pros\": \"1. The problem of multi-agent coordination is important and the methodologies shown in this paper are novel.\\n2. The experiments are comprehensive and rigorously prove the effectiveness of the proposed architectures. The analysis is also complete and insightful.\\n3. The paper is well-written and in a clear flow.\", \"cons\": \"1. The mechanism behind the coordination is not clearly presented and analyzed.\\n2. The datasets are limited in domains, and the knowledge gap between different agents is not presented which makes it unclear whether the improvement comes from the coordination of knowledge or an ensemble from different agents.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Good work\", \"review\": \"Summary: This paper presents a multi-agent system that dynamically integrates knowledge across domain-specific LLM agents for scientific content inference. 
The study examines four agent coordination architectures (Decentralized, Centralized, Layered, and Shared Pool) and evaluates them on arXiv title-to-abstract generation.\", \"strengths\": [\"Well-motivated approach that reflects real-world organizational structures\", \"Strong empirical performance that multi-agent systems outperform single-agent models\", \"Insightful comparisons across different coordination architectures\", \"Useful ablation studies demonstrating the importance of dynamic knowledge updates\"], \"weaknesses\": [\"Limited theoretical analysis explaining why certain architectures perform better\", \"Dataset construction details could be more thorough\", \"Comparisons with state-of-the-art RAG-based models would strengthen evaluation\", \"Insufficient discussion of practical applications beyond academic paper generation\"], \"rating\": \"8\", \"confidence\": \"4\"}", "{\"decision\": \"Accept (Poster)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Clear presentation but the scope is somewhat narrow\", \"review\": \"Summary\\n\\nThis paper introduces a multi-agent system that dynamically integrates domain-specific knowledge for improved cross-domain content inference. The authors evaluate four agent-coordination architectures\\u2014Decentralized, Centralized, Layered, and Shared Pool\\u2014using an arXiv-based title-to-abstract inference task. Experimental results suggest that multi-agent systems consistently outperform single-agent models in both accuracy and robustness, and the authors conclude that specialized multi-agent collaboration can more effectively facilitate consensus-building in interdisciplinary settings.\\n\\n\\n\\n\\nStrengths\\n\\n1. Well-Structured\\uff1a The paper is well organized, with a clear presentation and smooth language flow, making it easy to read.\\n\\n2. Detailed Experiments and Analysis\\uff1a Under both single-agent and multi-agent configurations, the authors conduct experiments on four different agent coordination strategies and provide comparative analyses. They also perform ablation studies on dynamic knowledge updates, as well as on the effect of varying the number of rounds and turns, thereby offering deeper insights into the effectiveness of each approach.\\n\\n\\nWeaknesses\\n\\n1. Task Specificity\\nAll experiments and evaluations focus solely on a title-to-abstract task, leaving it uncertain how well the findings generalize to other text-generation or content-inference scenarios.\\n\\n2. Lack of SOTA Comparison\\nIt is not entirely surprising that multi-agent systems outperform single-agent baselines, given that multiple agents have access to more data and consume more computational power. A comparison with other state-of-the-art (SOTA) methods would better situate the paper\\u2019s contributions within the broader research community.\\n\\n3. Limited Contribution\\nAlthough the paper compares four coordination strategies, these approaches do not introduce any major innovation for multi-agent collaboration. Moreover, the experiments rely on a single dataset and a single task, limiting the generalizability of the findings. The four strategies also exhibit inconsistent performance in both the All-domain and Expert structures, making it difficult to provide definitive guidance for other studies or real-world applications. As a result, the paper\\u2019s overall contribution appears limited.\\n\\n4. 
Lack of Computational Cost Analysis\\nIncluding a comparison of runtime and resource usage under different coordination strategies would strengthen the paper\\u2019s persuasiveness and practical relevance.\\n\\n5. Simplified Evaluation Metric\\nThe authors rely heavily on cosine similarity between generated segments and the ground-truth text. While this offers a coarse measure of alignment, it may not capture the more nuanced aspects.\\n\\nThe paper presents a clear exploration of multi-agent coordination for title-to-abstract generation, but its scope is somewhat narrow, and it would benefit from stronger comparisons, broader tasks, and additional analyses (e.g., computational costs and more sophisticated evaluation metrics).\", \"rating\": \"5\", \"confidence\": \"5\"}" ] }
5SqvDgWcJp
LLM-Augmented Chemical Synthesis and Design Decision Programs
[ "Haorui Wang", "Jeff Guo", "Lingkai Kong", "Rampi Ramprasad", "Philippe Schwaller", "Yuanqi Du", "Chao Zhang" ]
Retrosynthesis, the process of breaking down a target molecule into simpler precursors through a series of valid reactions, stands at the core of organic chemistry and drug development. Although recent machine learning (ML) research has advanced single-step retrosynthetic modeling and subsequent route searches, these solutions remain restricted by the extensive combinatorial space of possible pathways. Concurrently, large language models (LLMs) have exhibited remarkable chemical knowledge, hinting at their potential to tackle complex decision-making tasks in chemistry. In this work, we explore whether LLMs can successfully navigate the highly constrained, multi-step retrosynthesis planning problem. We introduce an efficient scheme for encoding reaction pathways and present a new route-level search strategy, moving beyond the conventional step-by-step reactant prediction. Through comprehensive evaluations, we show that our LLM-augmented approach excels at retrosynthesis planning and extends naturally to the broader challenge of synthesizable molecular design.
[ "Large Language models", "Retrosynthesis planning", "Molecule design" ]
Accept (Oral)
https://openreview.net/pdf?id=5SqvDgWcJp
https://openreview.net/forum?id=5SqvDgWcJp
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "oC7J3SAiaF", "NCc85ryxEI", "DGVKNxkJbX", "924HoBTZEC" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740894258928, 1740807353911, 1741143437550, 1740905799908 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission16/Reviewer_toPB" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission16/Reviewer_o6pr" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission16/Reviewer_V2aF" ] ], "structured_content_str": [ "{\"title\": \"Good Paper\", \"review\": \"Summary:\\nThis paper explores the application of large language models (LLMs) in retrosynthesis planning and synthesizable molecular design. The authors propose an LLM-based framework, LLM-Syn-Planner, that encodes reaction pathways efficiently and integrates an evolutionary search algorithm to generate and optimize full synthetic routes. Experimental results show that LLM-Syn-Planner outperforms single-step LLM models and approaches state-of-the-art retrosynthesis performance, demonstrating its potential in chemical synthesis planning.\", \"strengths\": \"1. The paper presents a novel approach that utilizes LLMs for constrained decision-making tasks in retrosynthesis and molecular design.\\n2. The proposed method moves beyond traditional step-by-step predictions, using a structured decision program with evolutionary optimization.\\n3. The authors conduct extensive experiments on retrosynthesis datasets (USPTO, Pistachio) and compare performance against traditional machine learning and search-based approaches.\", \"weaknesses\": \"1. While the paper acknowledges that LLMs underperform in single-step retrosynthesis tasks, it could delve deeper into why LLMs struggle with reaction prediction compared to specialized ML models.\\n2. The method relies on a predefined set of reaction templates, which might limit its applicability to novel or underexplored reaction spaces.\\n3. Given that evolutionary search methods can be computationally intensive, a detailed discussion of runtime efficiency and practical feasibility for real-world applications is highly recommended.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Fairly good work\", \"review\": \"This paper introduces LLM-Syn-Planner, a framework leveraging large language models (LLMs) for retrosynthesis planning and synthesizable molecular design. Key innovations include: Linear Route Encoding: A sequential decision-making format replacing traditional tree structures, reducing complexity for LLMs; Evolutionary Search Algorithm: Combines LLM-generated pathways with mutation operators and partial rewards, achieving 98.5% solve rates on USPTO-EASY; and Synthesizable Molecular Design Integration: Links molecular optimization (via MolLEO) with synthesis planning, ensuring 100% synthesizability in optimized molecules (Figure 3).\\n\\nStrengths\\n1. Novel Evolutionary Strategy: The mutation-selection loop (Algorithm 1) effectively optimizes full synthesis pathways rather than individual steps, addressing LLMs' single-step prediction limitations (Section 4.2).\\n2. Practical Formulation: Linear route formatting reduces token overhead by 40% compared to tree structures while maintaining chemical validity (Section 3.1).\\n3. Holistic Evaluation: Three-level validation (molecule, reaction, route) ensures chemically plausible outputs (Table 1).\\n4. 
Resource Efficiency: Achieves competitive results with 500 model calls versus 50,000+ in MCTS/Retro* baselines (Table 2).\\n\\nWeaknesses and Recommendations\\n1. GPT-4o Dependency: All experiments rely on proprietary GPT-4o (2024-11-20), raising reproducibility concerns. Testing open-source LLMs (e.g., Llama-3) would strengthen claims about generalizability.\\n2. Simulation-Centric Validation: Real-world synthesis challenges (catalyst availability, reaction yields) are absent. A case study with lab validation is critical for practical impact.\\n3. Limited Baseline Comparisons: Omits recent LLM-based retrosynthesis tools (e.g., BATGPTChem [Yang et al. 2024], SynASK [Zhang et al. 2025]).\\n4. Opaque Cost Analysis: No discussion of API costs ($0.03/1k tokens) or latency for industrial-scale deployment.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"LLM-AUGMENTED CHEMICAL SYNTHESIS AND DESIGN DECISION PROGRAMS\", \"review\": \"1. Introduces LLM-Syn-Planner, an LLM-driven evolutionary search framework for retrosynthesis and molecule design.\\n\\n2. Moves beyond single-step retrosynthesis models by optimizing entire multi-step synthetic pathways.\\n\\n3. Uses LLMs for full-route retrosynthesis planning, improving over standard Monte Carlo Tree Search (MCTS) and Retro*.\\n\\n4. Achieves state-of-the-art performance on retrosynthesis benchmarks (USPTO, Pistachio) while ensuring synthesizability.\\n\\n5. Integrates retrieval-augmented generation (RAG) to guide LLMs in retrosynthesis decision-making.\\n\\n6. Incorporates molecule similarity-based retrieval to refine synthesis route generation.\\n\\n7. Balances molecule optimization and synthetic accessibility using a multi-step evolutionary strategy.\\n\\n8. Outperforms traditional search-based retrosynthesis models on complex datasets with higher solve rates.\\n\\n9. Computational efficiency not fully explored\\u2014impact of LLM inference cost vs. traditional retrosynthesis pipelines unclear.\\n\\n10. Evaluation is limited to in silico performance\\u2014real-world experimental synthesis validation is missing.\\n\\n11. Potential biases in retrosynthesis suggestions\\u2014may favor known reaction templates over novel pathways.\\n\\n12. Comparison against reinforcement learning-based retrosynthesis models (e.g., PDVN, Double-ended search) missing.\\n\\n13. Limited interpretability of LLM-generated synthesis plans\\u2014unclear decision rationale for chosen reaction steps.\\n\\n14. No human-in-the-loop assessment\\u2014unclear how well expert chemists would trust or refine generated routes.\\n\\n15. Strong work but requires further experimental validation, RL-based comparisons, and interpretability improvements.\", \"rating\": \"7\", \"confidence\": \"5\"}" ] }
43XMKuTTK0
Agent S: An Open Agentic Framework that Uses Computers Like a Human
[ "Saaket Agashe", "Jiuzhou Han", "Shuyu Gan", "Jiachen Yang", "Ang Li", "Xin Eric Wang" ]
We present Agent S, an open agentic framework that enables autonomous interaction with computers through Graphical User Interface (GUI), aimed at transforming human-computer interaction by automating complex, multi-step tasks. Agent S addresses three key challenges in automating computer tasks: acquiring domain-specific knowledge, planning over long task horizons, and handling dynamic, non-uniform interfaces. To this end, Agent S introduces experience-augmented hierarchical planning, which learns from external knowledge search and internal experience retrieval at multiple levels, facilitating efficient task planning and subtask execution. In addition, it employs an Agent-Computer Interface (ACI) to better elicit the reasoning and control capabilities of GUI agents based on Multimodal Large Language Models (MLLMs). Evaluation on the OSWorld benchmark shows that Agent S outperforms the baseline by 9.37% on success rate (an 83.6% relative improvement) and achieves a new state-of-the-art. Comprehensive analysis highlights the effectiveness of individual components and provides insights for future improvements. Furthermore, Agent S demonstrates broad generalizability to different operating systems on a newly-released WindowsAgentArena benchmark. Code will be made publicly available.
[ "Large Vision and Language Model", "Agents", "Retrieval Augmented Generation", "GUI", "Large Language Models", "Agent Computer Interface" ]
Accept (Oral)
https://openreview.net/pdf?id=43XMKuTTK0
https://openreview.net/forum?id=43XMKuTTK0
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "xlcdFqPtue", "pFudmy7wOy", "AdZko6CC3f" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741143628784, 1741045220624, 1741059178192 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission28/Reviewer_xrhh" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission28/Reviewer_DNU9" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review - Agent S: An Open Agentic Framework that Uses Computers Like a Human\", \"review\": \"Pros:\\n1. This paper is well-written. The overall framework is promising and innovative.\\n2. The experimental results look promising and prove the framework's efficiency.\\n3. The implementation of narrative memory and episodic memory creates a robust memory system that enables continual learning.\\n4. The paper provides a detailed error analysis.\", \"cons\": \"1. The framework consists of many different modules, and it seems that the paper hasn't shared the source code. I am not sure about how complicated it is to implement such a framework in a real application scenario.\\n2. Despite improvements, the error analysis shows execution errors occur in 79.59% of failed tasks, suggesting significant challenges remain in reliable task completion.\\n\\nOverall, I like this paper and I strongly recommend to accept this paper.\", \"rating\": \"9\", \"confidence\": \"4\"}", "{\"title\": \"Review of Submission 28\", \"review\": [\"**Strengths**\", \"The paper effectively combines several components (hierarchical planning, experience retrieval, and the ACI) into a cohesive framework that significantly outperforms baseline approaches. The 83.6% relative improvement over the baseline on OSWorld effectively proves the effectiveness of the framework.\", \"The framework addresses real-world challenges in GUI automation that have practical applications for accessibility and productivity, representing progress toward more general-purpose AI systems that can operate in the same environments as humans.\", \"**Weaknesses**\", \"While the integration is novel, most individual components (hierarchical planning, memory retrieval, OCR augmentation) build on existing techniques in the literature. The paper would benefit from more clearly articulating the technical innovations beyond integration.\", \"The paper doesn't thoroughly address the scalability of the approach to more complex applications or tasks outside the benchmarks. The memory retrieval process might face challenges with very large experience pools or require better indexing approaches.\"], \"rating\": \"7\", \"confidence\": \"4\"}" ] }
3DPN43Zofa
META-LEARNING FOR SCIENTIFIC HYPOTHESIS GENERATION AND EXPERIMENTAL DESIGN
[ "Sandeep Ravindra Tengali" ]
Generating novel scientific hypotheses and designing experiments often requires deep domain expertise and substantial time investment. This paper proposes a meta-learning framework to accelerate hypothesis generation and experimental design using agentic AI systems. The approach trains AI agents across diverse scientific domains (e.g., materials science, drug discovery, physics simulations), enabling rapid adaptation to new research problems with minimal labeled data. Specifically, a few-shot learning mechanism facilitates domain transfer, while a reinforcement learning (RL) engine autonomously refines experimental parameters under resource constraints. Experimental results demonstrate a 40% reduction in design iterations and 25% faster convergence on valid hypotheses, statistically validated with p < 0.05. These findings highlight the potential of meta-learning and RL to expedite scientific discovery, reduce trial-and-error, and improve research efficiency. Future work will explore formal theoretical guarantees, benchmarking against SOTA approaches, and real-world validation in laboratory settings.
[ "Meta-Learning", "Reinforcement Learning", "Scientific Discovery", "Few-Shot Learning", "Hypothesis Generation", "Experimental Design", "Agentic AI", "Multi-Domain Adaptation", "Bayesian Optimization", "Automated Experimentation" ]
Reject
https://openreview.net/pdf?id=3DPN43Zofa
https://openreview.net/forum?id=3DPN43Zofa
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "zcfvNWzmji", "seCGnDkXIj", "hQX8LWraFU", "EB6ofH8pg6" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740897112860, 1740702907377, 1741143341092, 1740763051388 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission14/Reviewer_6M7T" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission14/Reviewer_4fip" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission14/Reviewer_t4k1" ] ], "structured_content_str": [ "{\"title\": \"Serious Concerns on Structure, Rigor, and Authenticity\", \"review\": \"Summary:\\n\\nThis paper presents a meta-learning framework that integrates few-shot learning and reinforcement learning (RL) to automate scientific hypothesis generation and experimental design. The authors claim that their approach accelerates discovery across multiple domains, including materials science, drug discovery, and physics simulations. However, the manuscript primarily consists of bullet points rather than fully developed text, raising serious concerns regarding its originality, coherence, and scientific rigor.\", \"weaknesses\": \"1. The manuscript is largely composed of bullet points, and lacks detailed explanations, making it difficult to assess its depth and clarity.\\n\\n2. It fails to provide essential sections typically expected in a scientific paper, such as a well-structured Introduction, a clear Problem Statement, a thorough Related Work review, detailed Implementation and Experimental Methods, and a meaningful Result Analysis. The absence of these critical components gives the impression of minimal human effort or intellectual contribution.\\n\\n3. A manual verification of citations revealed discrepancies. For instance, the second reference in the bibliography\\u2014 Tabor, Z., Newton, K. F., & Brightman, I. (2022). Automated hypothesis generation in materials design using few-shot neural architectures. Advanced Materials, 34(12), 2106813.\\u2014could not be found despite extensive searching, raising concerns about citation integrity.\\n\\nOverall, this paper exhibits signs of potential AI-generated content. Even if human-authored, it suffers from fundamental weaknesses in theoretical grounding, empirical validation, and scientific contribution, rendering it unsuitable for publication.\", \"rating\": \"2\", \"confidence\": \"5\"}", "{\"title\": \"Need more work\", \"review\": \"This paper proposes a meta-learning framework combining few-shot learning and reinforcement learning for scientific hypothesis generation and experimental design across multiple domains.\", \"strengths\": \"1. Addresses an important problem with potential cross-domain applications\", \"weaknesses\": \"1. Lacks Technical Details:\\nNo concrete algorithms or implementation details.\\nResults appear aspirational rather than from completed experiments.\\nClaims statistical significance without methodological details.\\n2. Insufficient Evaluation:\\nPerformance claims lack proper experimental validation.\\nBaseline comparisons are superficial.\\nNo specific details on how experiments were conducted.\\n\\nOverall, I feel this paper needs to be significantly polished and expanded for details.\", \"rating\": \"3\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"Not good enough.\", \"review\": \"This paper works on a meta-learning framework to accelerate hypothesis generation and experimental design using agentic AI systems. 
The proposed approach can enable AI agents to learn across diverse scientific domains with minimal labeled data, adapting to different domains. Overall, the problem is innovative but the paper is not of good quality and the experiments and the analysis are not sufficient to prove the effectiveness of the proposed method.\", \"quality\": \"The paper is not fully developed, with no clear discussion of prior work and no detailed explanation of the methodology. The experiments cover different scientific domains, which is good, but results are shown for only one domain without sufficient analysis.\", \"clarity\": \"The paper clearly shows the challenges of using AI for scientific discoveries. However, it does not clearly describe the proposed framework and the experimental setup and evaluation process. Moreover, the experiments are not completely finished and are not sufficient to show the effectiveness of the proposed framework.\", \"originality\": \"The problem and methodologies proposed in this paper are novel and important for AI in scientific discoveries.\", \"pros\": \"1. The problem of using AI agents to enhance scientific discovery is important and the methodologies shown in this paper are novel.\\n2. The tasks and datasets in this paper show a diversity of scientific discovery domains.\", \"cons\": \"1. The methodology behind this paper is not clearly presented.\\n2. The experimental results are not sufficient to show the effectiveness of the proposed framework.\\n3. The paper is not well-written and can be further improved.\", \"rating\": \"3\", \"confidence\": \"4\"}" ] }
21TqI2gJOa
Federated Learning for Decentralized Scientific Collaboration: Privacy-Preserving Multi-Agent AI for Cross-Domain Research
[ "Sandeep Ravindra Tengali" ]
Scientific collaboration often requires multi-institutional AI training, yet privacy concerns, regulatory constraints, and data heterogeneity hinder centralized model development. This paper introduces a federated learning (FL) framework that enables scientific agents to collaboratively refine AI models without sharing raw data. By integrating secure aggregation, differential privacy, and multi-agent orchestration, the system ensures efficient cross-domain knowledge transfer in applications like genomics, medical research, and climate science. Proposed method achieves 35% faster model convergence compared to single-institution baselines, validated with $p < 0.05$, while maintaining low privacy leakage risk. Unlike traditional FL, our framework incorporates agentic AI coordination, allowing domain-specific adaptation and conflict resolution across institutions. We discuss scalability challenges, propose hierarchical FL solutions, and outline future work in theoretical guarantees and real-world deployment. This approach presents a scalable and privacy-preserving alternative to centralized AI training, accelerating scientific discovery while respecting data sovereignty.
[ "Federated Learning", "Multi-Agent Systems", "Privacy-Preserving AI", "Decentralized Collaboration", "Secure Aggregation", "Differential Privacy", "Meta-Learning", "Scientific AI", "Cross-Domain Knowledge Transfer", "Hierarchical FL" ]
Reject
https://openreview.net/pdf?id=21TqI2gJOa
https://openreview.net/forum?id=21TqI2gJOa
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "jvLQ4nFnba", "e7p2CTLamL", "XNcAG60lFC", "AnaKVReNfG" ], "note_type": [ "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1740746281991, 1740805178643, 1740763859696, 1741143392884 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Submission15/Reviewer_7PXz" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission15/Reviewer_7MrB" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission15/Reviewer_HuLu" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Lack of theoretical depth and real-world validation\", \"review\": \"Summary of Contributions\\nThis paper proposes a federated learning (FL) framework tailored for decentralized scientific collaboration, featuring: 1. Multi-Agent Orchestration: Domain-specific coordinators (genomics, climate, etc.) that mediate cross-institutional model updates while resolving conflicts. 2. Privacy-Preserving Techniques: Integration of secure aggregation and differential privacy, reducing membership inference risks to \\\"Low\\\" (Table 1). 3. Empirical Validation: Demonstrates 35% faster convergence versus single-domain training across genomics, medical imaging, and climate forecasting tasks.\\n\\nStrengths\\n1. Domain-Specific Customization: The multi-agent orchestrator (Section 4.4) effectively bridges heterogeneous scientific domains, outperforming vanilla FedAvg by 2\\u20133% in accuracy (Table 1).\\n2. Practical Relevance: Addresses critical barriers to scientific collaboration (privacy regulations, data silos) with clear industry applications (Section 2).\\n3. Comprehensive Evaluation: Tests on three distinct domains (genomics, medical imaging, climate) with rigorous privacy leakage metrics.\\n\\nWeaknesses and Recommendations\\n1. Theoretical Gaps: While methodology is well-described, the paper lacks a formal analysis of convergence guarantees or stability under domain shift. Deriving bounds for multi-agent FL equilibria would strengthen theoretical rigor.\\n2. Limited Real-World Testing: Experiments rely on simulated nodes (Section 5.1). A pilot deployment (e.g., hospital networks) is needed to validate networking and governance assumptions.\\n3. Scalability Oversights: No discussion of hierarchical FL or adaptive update protocols for scaling beyond 100+ institutions, a noted concern in prior work.\", \"suggested_improvements\": \"1. Add a theoretical section analyzing convergence under domain heterogeneity.\\n2. Include latency/power metrics for real-world deployment feasibility.\\n3. Benchmark against Bayesian FL methods to address scalability.\", \"rating\": \"5\", \"confidence\": \"3\"}", "{\"title\": \"Insufficient experimental detail, lack of theoretical rigor concerning convergence, minimal elaboration on domain-specific adaptations, and writing quality that could be enhanced for clarity and formality.\", \"review\": \"Summary:\\nThis paper proposes a federated learning framework tailored to multi-institution, decentralized scientific collaborations. Instead of pooling data at a central server, each institution trains models locally and shares only updates or model parameters. The authors introduce an agentic AI orchestrator that coordinates multi-agent interactions and domain-specific adaptations, aiming to accelerate learning and reduce communication overhead.\", \"strengths\": \"1. The presentation of key points is straightforward.\\n2. 
The authors analyze the potential limitations and future directions for their work.\", \"weaknesses\": \"1. This quality of writing is not good; using more formal words would make it better and concise.\\n2. The paper\\u2019s structure lacks clarity and coherence, omitting key experimental details. For example, the experimental setting of table 1 is unclear. What are the results for other datasets (as the authors claimed they conducted experiments on datasets of genomics, medical imaging, and climate sensors)?\\n3. While the approach is well-motivated and supported by empirical evidence, the paper lacks a thorough theoretical grounding regarding convergence guarantees or equilibrium behavior in multi-agent federated setups.\\n4. Although the multi-agent orchestration is described, there is little detail on how domain-specific tasks or domain shifts are handled in practice. For example, medical imaging and genomics can differ considerably in data dimensionality, labeling approaches, or performance metrics. More elaboration on domain adaptations, conflict resolution strategies, or how to weigh domain priorities would be beneficial.\", \"rating\": \"2\", \"confidence\": \"4\"}", "{\"title\": \"Not good enough.\", \"review\": \"This paper works on a federated learning framework for AI-driven scientific collaboration across geographically dispersed institutions. Instead of relying on centralized models or pooled datasets, the proposed approach enables distributed\\nscientific agents to train AI models while preserving local data privacy. Overall, the problem is innovative but the paper is not in good quality and the experiments and the analysis are not sufficient to prove the effectiveness of the proposed method.\", \"quality\": \"The paper is not fully developed with no clear clarification about the detailed explanation of the methodology. The experiments cover different scientific domains which are good but the results are not sufficient with no specific analysis to show the effectiveness of the proposed framework.\", \"clarity\": \"The paper highlights the challenges of applying federated learning in multi-agent systems for knowledge sharing and learning while maintaining privacy. However, it lacks a clear description of the proposed framework, as well as details on the experimental setup and evaluation process. Additionally, the experiments are incomplete and insufficient to demonstrate the effectiveness of the proposed framework.\", \"originality\": \"The problem and methodologies proposed in this paper are novel and important for multi-agent knowledge sharing and learning while preserving privacy.\", \"pros\": \"1. The problem of using federated learning to enhance knowledge learning across agents is important and the methodologies shown in this paper are novel.\\n2. The tasks and datasets in this paper show a diversity of scientific domains.\", \"cons\": \"1. The methodology behind this paper is not clearly presented.\\n2. The experimental results are not sufficient enough to show the effectiveness of the proposed framework.\\n3. The paper is not well-written and can be further improved.\", \"rating\": \"3\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}" ] }
1WUCSNAjjB
AstroAgents: A Multi-Agent AI for Hypothesis Generation from Mass Spectrometry Data
[ "Daniel Saeedi", "Denise K. Buckner", "Jose C. Aponte", "Amirali Aghazadeh" ]
With upcoming sample return missions across the solar system and the increasing availability of mass spectrometry data, there is an urgent need for methods that analyze such data within the context of existing astrobiology literature and generate plausible hypotheses regarding the emergence of life on Earth. Hypothesis generation from mass spectrometry data is challenging due to factors such as environmental contaminants, the complexity of spectral peaks, and difficulties in cross-matching these peaks with prior studies. To address these challenges, we introduce AstroAgents, a large language model-based, multi-agent AI system for hypothesis generation from mass spectrometry data. AstroAgents is structured around eight collaborative agents: a data analyst, a planner, three domain scientists, an accumulator, a literature reviewer, and a critic. The system processes mass spectrometry data alongside user-provided research papers. The data analyst interprets the data, and the planner delegates specific segments to the scientist agents for in-depth exploration. The accumulator then collects and deduplicates the generated hypotheses, and the literature reviewer identifies relevant literature using Semantic Scholar. Finally, the critic evaluates the hypotheses, offering rigorous suggestions for improvement. To assess AstroAgents, an astrobiology expert evaluated the novelty and validity of more than a hundred hypotheses generated from data obtained from eight meteorites and ten soil samples. Of these hypotheses, surprisingly, 36% were identified as plausible, and among those, 66% were novel.
[ "Multi-Agent Systems", "Mass Spectrometry Data", "Hypothesis Generation", "Scientific Discovery", "Collaborative AI", "Astrobiology" ]
Accept (Oral)
https://openreview.net/pdf?id=1WUCSNAjjB
https://openreview.net/forum?id=1WUCSNAjjB
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "mRS2KcJuK0", "iQ9R7E7RIb", "JihqPwkUvK" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741148074487, 1741027959437, 1741068442899 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission26/Reviewer_h3RG" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission26/Reviewer_64zF" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Official Review\", \"review\": [\"Strength:\", \"The paper presents an agentic framework for automating hypothesis generation from mass spectrometry data.\", \"Comprehensive experiments evaluate two large language model variants, with thorough human evaluation by domain experts providing credibility to the results.\", \"This work is clearly presented and easy to follow.\"], \"weakness_and_comments\": [\"The paper lacks mathematical formalization of the problem, making it difficult to precisely understand the nature of mass spectrometry data processing and the specific inputs/outputs at each stage of the framework.\", \"Several critical components are inadequately explained, creating confusion. For instance, Line 149 states \\\"the agent refines its findings based on feedback from the critic agent,\\\" but this interaction isn't clearly depicted in Figure 1, leaving the feedback mechanism ambiguous.\", \"Section 2.4 fails to specify important implementation details, including how many artificial researchers are deployed in the framework and what criteria determine this number, which impacts reproducibility.\", \"The results section provides insufficient analysis of the presented data. Table 1 contains important findings but is neither properly referenced in the text nor analyzed to explain the implications of the performance metrics, missing an opportunity to derive meaningful insights from the experimental results.\", \"The manuscript can be rated higher if the comments in weakness section are addressed.\"], \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"AstroAgents: A Multi-Agent AI for Hypothesis Generation from Mass Spectrometry Data\", \"review\": \"This paper present AstroAgents, a multi-agent AI system designed to analyze mass spectrometry data from meteorites and soil samples to generate hypotheses about the emergence of life.\\n\\n### **Strength:**\\n\\n1) **Novel application**: The paper demonstrates how LLMs can effectively bridge multiple scientific disciplines (chemistry, astronomy, biology) to generate hypotheses about origins of life.\\n\\n2) **Clear multi-agent design**: The system's architecture with specialized agents for different tasks is well-conceived, with clear roles and interactions between agents.\\n\\n3) **Literature-Grounded Reasoning**: The integration of the Literature Review agent that allows hypotheses to be contextually aligned with existing research is promising and interesting.\\n\\n4) **Concrete examples**: The paper includes several well-documented examples of the hypotheses generated, making the results tangible and understandable. \\n\\n\\n### **Weakness**: \\n\\n1) **Novelty assessment**: The determination of whether a hypothesis is \\\"novel\\\" seems subjective, and the paper doesn't clarify how this was established relative to existing literature.\\n\\n2) **Baseline comparison**: There's no comparison with simpler approaches, making it hard to gauge the real improvement over alternatives\", \"rating\": \"8\", \"confidence\": \"4\"}" ] }
0XK0ZETtBO
Large Language Models Are Innate Crystal Structure Generators
[ "Jingru Gan", "Peichen Zhong", "Yuanqi Du", "Yanqiao Zhu", "Chenru Duan", "Haorui Wang", "Daniel Schwalbe-Koda", "Carla P Gomes", "Kristin Persson", "Wei Wang" ]
Crystal structure generation is fundamental to materials discovery, enabling the prediction of novel materials with desired properties. While existing approaches leverage Large Language Models (LLMs) through extensive fine-tuning on materials databases, we show that pre-trained LLMs can inherently generate stable crystal structures without additional training. Our novel framework MatLLMSearch integrates pre-trained LLMs with evolutionary search algorithms, achieving a 78.38% metastable rate validated by machine learning interatomic potentials and 31.7% DFT-verified stability via quantum mechanical calculations, outperforming specialized models such as CrystalTextLLM. Beyond crystal structure generation, we further demonstrate that our framework can be readily adapted to diverse materials design tasks, including crystal structure prediction and multi-objective optimization of properties such as deformation energy and bulk modulus, all without fine-tuning. These results establish pre-trained LLMs as versatile and effective tools for materials discovery, opening up new venues for crystal structure generation with reduced computational overhead and broader accessibility.
[ "Crystal Structure Generation", "Large Language Models", "Evolutionary Search" ]
Accept (Oral)
https://openreview.net/pdf?id=0XK0ZETtBO
https://openreview.net/forum?id=0XK0ZETtBO
ICLR.cc/2025/Workshop/AgenticAI
2025
{ "note_id": [ "M2DPnDlqh0", "KX3Bmh6VoN", "95pAubqGxR", "45ChmELqnD" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741143011044, 1741112701017, 1740937575244, 1740905545417 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/AgenticAI/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission10/Reviewer_hm3J" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission10/Reviewer_H1nZ" ], [ "ICLR.cc/2025/Workshop/AgenticAI/Submission10/Reviewer_ScPH" ] ], "structured_content_str": [ "{\"decision\": \"Accept (Oral)\", \"title\": \"Paper Decision\"}", "{\"title\": \"Large Language Models Are Innate Crystal Structure Generators\", \"review\": \"The paper Large Language Models Are Innate Crystal Structure Generators presents MATLLMSEARCH, an evolutionary framework that combines pre-trained LLMs with evolutionary search algorithms to generate and optimize crystal structures without requiring fine-tuning. Unlike conventional generative models that demand extensive training on materials-specific datasets, MATLLMSEARCH exploits LLM inference alone, demonstrating that pre-trained LLMs inherently encode sufficient chemical and structural knowledge for stable crystal generation. By iteratively selecting, reproducing, and evaluating structures, the framework integrates LLM-generated candidates with thermodynamic validation using CHGNet and DFT calculations, achieving a 76.81% metastable generation rate and 31.7% DFT-verified stability, surpassing fine-tuned models like CrystalTextLLM while significantly reducing computational costs. Beyond structure generation, MATLLMSEARCH extends to crystal structure prediction and multi-objective optimization, optimizing properties like bulk modulus and facilitating the discovery of novel metastable polymorphs. While the paper is well-written, the evaluation primarily compares MATLLMSEARCH against a single baseline (CrystalTextLLM), and additional comparisons would strengthen the analysis. Additionally, clarification on the meaning of \\\"-\\\" in Table 2 would be helpful. Despite eliminating fine-tuning overhead, the framework's effectiveness remains highly dependent on prompt engineering, and the practical synthesizability of the generated structures is yet to be verified, highlighting the need for experimental validation to bridge computational predictions with real-world materials discovery.\", \"rating\": \"6\", \"confidence\": \"2\"}", "{\"title\": \"interesting paper\", \"review\": \"This paper proposed a framework MATLLMSEARCH which integrates prerained LLMs with evolutionary search algorithms.\\nThe paper included extensive empirical studies on different tasks to illustrate the insights.\", \"rating\": \"7\", \"confidence\": \"2\"}", "{\"title\": \"LARGE LANGUAGE MODELS ARE INNATE CRYSTAL STRUCTURE GENERATOR\", \"review\": \"1. Demonstrates LLMs as effective crystal structure generators without fine-tuning, reducing computational overhead.\\n\\n2. Introduces MATLLMSEARCH, an LLM-driven evolutionary search framework for materials discovery.\\n\\n3. Outperforms specialized models (e.g., CrystalTextLLM), achieving higher metastable and DFT-verified stable structures.\\n\\n4. Applies evolutionary search to guide LLMs in generating structurally valid and thermodynamically stable crystals.\\n\\n5. Expands beyond generation to prediction and multi-objective optimization (e.g., bulk modulus, deformation energy).\\n\\n6. 
Validates results using machine learning interatomic potentials (MLIPs) and DFT calculations.\\n\\n7. Employs a structured workflow\\u2014Selection, Reproduction, and Evaluation\\u2014to iteratively refine structures.\\n\\n8. Benchmarks against diffusion-based and flow-based generative models but lacks reinforcement learning-based comparisons.\\n\\n9. Computational efficiency not fully explored\\u2014impact of LLM inference costs vs. traditional fine-tuned approaches unclear.\\n\\n10. Structure evaluation relies on CHGNet, with limited generalization tested across different MLIP methods.\\n\\n11. No ablation on LLM scale dependency\\u2014performance of smaller LLMs vs. 70B models is underexplored.\\n\\n12. Unclear adaptability to experimental validation and real-world synthesis feasibility.\\n\\n13. Potential biases in LLM-generated structures\\u2014may favor known compositions over truly novel discoveries.\\n\\n14. Does not explore hybrid approaches integrating fine-tuned models with generative LLMs.\\n\\n15. Promising results but needs more efficiency analysis, experimental validation, and expanded comparisons.\", \"rating\": \"7\", \"confidence\": \"5\"}" ] }
zJAm9nLdaQ
A Flexible Large Language Models Guardrail Development Methodology Applied to Off-Topic Prompt Detection
[ "Gabriel Chua", "Chan Shing Yee", "Shaun Khoo" ]
Large Language Models (LLMs) are prone to off-topic misuse, where users may prompt these models to perform tasks beyond their in- tended scope. Current guardrails, which often rely on curated examples or custom classifiers, suffer from high false-positive rates, limited adaptability, and the impracticality of requiring real-world data that isn’t available in pre-production. In this paper, we introduce a flexible, data-free guardrail development methodology that addresses these challenges. By thoroughly defining the problem space qualitatively and passing this to an LLM to generate diverse prompts, we construct a synthetic dataset to benchmark and train off-topic guardrails that outperform heuristic approaches. Addition- ally, by framing the task as classifying whether the user prompt is relevant with respect to the system prompt, our guardrails effectively generalize to other misuse categories, including jailbreak and harmful prompts. Lastly, we further contribute to the field by open- sourcing both the synthetic dataset and the off-topic guardrail models, providing valuable resources for developing guardrails in pre-production environments and supporting future research and development in LLM safety.
[ "LLM", "Guardrails", "Synthetic Data" ]
Accept
https://openreview.net/pdf?id=zJAm9nLdaQ
https://openreview.net/forum?id=zJAm9nLdaQ
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "xVZ1I9p2oI", "XjNMdfHYge", "GaxNQOeThX", "FCcfh6vvit" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740021806372, 1741079948092, 1740706541026, 1739707566725 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission142/Reviewer_PTXv" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission142/Reviewer_XGDX" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission142/Reviewer_K8o5" ] ], "structured_content_str": [ "{\"title\": \"Good paper\", \"review\": \"Pro:\\n\\nNovel Methodology\\nThe proposed approach eliminates the need for real-world data in the pre-deployment phase by using synthetic data generation, which is both innovative and practical.\\n\\nComprehensive Framework\", \"the_paper_clearly_outlines_a_three_step_guardrail_development_framework\": \"qualitative problem analysis, synthetic data generation via LLM prompting, and model training.\\nThis systematic approach ensures that edge cases, diversity in prompt styles, and domain-specific challenges are well addressed.\\n\\n\\nGeneralization and Deployment Considerations\\nThe method not only improves off-topic detection but also generalizes to other misuse cases (e.g., jailbreak and harmful prompts).\\nDetailed discussion on deployment issues\\u2014such as threshold tuning, integration with alignment methods, model choice, and active learning pipelines\\u2014shows strong practical relevance. (But please notice that the jailbreak or harmful request are easily rejected by the aligned model, maybe it can be better to test them in uncensored model)\", \"could_improve\": \"Reliance on Synthetic Data\\nWhile synthetic data generation is a powerful idea, the paper acknowledges that such data may not fully capture the nuances of real-world user behavior, especially in multilingual or high-context scenarios.\\nFuture work could explore integrating real user data (through active learning) to further refine and validate the guardrail classifiers.\", \"rating\": \"8\", \"confidence\": \"4\"}", "{\"decision\": \"Accept\", \"comment\": \"The paper's reliance on synthetic data raises concerns about its ability to generalize to real-world user behavior, particularly in multilingual or high-context scenarios. Additionally, the method exhibits significant performance drops when applied to different datasets, suggesting potential overfitting and a lack of robustness, which could have been better evaluated using data from another LLM or real user interactions.\", \"title\": \"Paper Decision\"}", "{\"title\": \"In this work authors work on off-topic detection by showing a methodology of generating data and later using it to create off-topic classifier. 
This method can be used with chat applications that have a specific topic designed for them allowing for light-weight detector of off-topic queries.\", \"review\": \"This work is well written, and the idea of off-topic detection seems like a good research direction for LLMs developed with specific topics in mind.\", \"pros\": [\"The authors show how one could use their method in an end-to-end manner, starting from the creation of the data and obtaining an off-topic classifier.\", \"Those classifiers are lightweight, making them useful in real-time applications.\"], \"cons\": [\"We can observe a large drop in performance when this method is applied to other datasets, potentially highlighting its problem with generalization and possible overfitting.\", \"More evaluations on data that don't come from the same distribution from which training data was generated would better show how well this method would generalize. The authors could have tried generating an eval set with another LLM.\", \"I would like the authors to provide hyperparameters used for training.\", \"L259 missing citation?\", \"L314 missing reference to Appendix\"], \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Review for \\\"A Flexible Large Language Models Guardrail Development Methodology Applied to Off-Topic Prompt Detection\\\"\", \"review\": [\"# Summary\", \"The paper presents a practical methodology for developing LLM guardrails focused on detecting off-topic user prompts. The approach leverages synthetic data generation via LLMs to train off-topic classifiers without requiring real-world data. The authors compare bi-encoder and cross-encoder models, demonstrating high performance on synthetic benchmarks and generalization to jailbreak and harmful prompt detection. The paper also releases synthetic datasets and trained models to support the broader research community.\", \"## Strengths\", \"The paper addresses a relevant safety problem in LLM deployment\\u2014off-topic misuse\\u2014that is often underexplored compared to harmful content.\", \"The data-free guardrail development methodology is a pragmatic approach to overcome pre-deployment data scarcity.\", \"Strong empirical comparisons with baseline classifiers and generalization assessments on jailbreak and harmful prompt benchmarks.\", \"The dataset and models contribute to the development of LLM safety solutions in real-world settings.\", \"## Weaknesses\", \"Results are primarily based on synthetic data, with minimal real user data evaluation. The distribution gap between synthetic and real-world usage remains a concern.\", \"Off-topic detection closely resembles domain relevance classification, reducing theoretical novelty.\", \"Insufficient analysis of edge cases and borderline prompts in deployment contexts.\"], \"rating\": \"7\", \"confidence\": \"4\"}" ] }
yr0QEDDTSO
Rethinking Hallucinations: Correctness, Consistency, and Prompt Multiplicity
[ "Prakhar Ganesh", "Reza Shokri", "Golnoosh Farnadi" ]
Large language models (LLMs) are known to "hallucinate" by generating false or misleading outputs. Hallucinations pose various harms, from erosion of trust to widespread misinformation. Existing hallucination evaluation, however, focuses only on "correctness" and often overlooks "consistency", necessary to distinguish and address these harms. To bridge this gap, we introduce _prompt multiplicity_, a framework for quantifying consistency through prompt sensitivity. Our analysis reveals significant multiplicity (over 50% inconsistency in benchmarks like Med-HALT), suggesting that hallucination-related harms have been severely underestimated. Furthermore, we study the role of consistency in hallucination detection and mitigation. We find that: (a) detection techniques capture consistency, not correctness, and (b) mitigation techniques like RAG can introduce additional inconsistencies. By integrating prompt multiplicity into hallucination evaluation, we provide an improved framework of potential harms and uncover critical limitations in current detection and mitigation strategies.
[ "hallucinations", "evaluation", "prompt sensitivity" ]
Accept
https://openreview.net/pdf?id=yr0QEDDTSO
https://openreview.net/forum?id=yr0QEDDTSO
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "vqAdRAsrrL", "b74TqlmSVE", "ZZtBVVdduS" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741103649412, 1740957339295, 1740235292545 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission102/Reviewer_rVrC" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission102/Reviewer_3FUh" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review\", \"review\": \"This paper makes a significant contribution to our understanding of hallucinations in LLMs by introducing the concept of \\\"prompt multiplicity\\\" - how variations in prompts can cause models to switch between correct and incorrect answers despite maintaining similar overall accuracy scores. The authors conduct extensive experiments across multiple benchmarks and model families to demonstrate that existing hallucination evaluation methods fail to capture this important dimension of model reliability.\", \"strengths\": [\"The research formalizes prompt multiplicity as a metric for evaluating hallucination stability, building on existing multiplicity literature and adapting it specifically for LLM evaluation.\", \"It features a comprehensive experimental design covering 6 different benchmarks and 16 models across various families.\", \"The authors propose a refined taxonomy that separates hallucinations into \\\"prompt-agnostic errors\\\" (consistent factual mistakes) and \\\"randomness\\\" (inconsistent answers due to prompt sensitivity), which helps better understand the different types of potential harms.\", \"The research demonstrates that existing uncertainty-based hallucination detection techniques primarily identify randomness rather than factual errors, highlighting the limitations of current approaches.\"], \"weaknesses\": [\"The study relies exclusively on MCQ-style benchmarks. Extending this framework to different benchmark types, particularly those involving free-form generation, would provide more comprehensive insights into prompt multiplicity across diverse task settings.\", \"Most variations are created simply by shuffling demonstrations. A more systematic investigation into how demonstration quality and location affect hallucination rates would likely reveal stronger factors influencing prompt multiplicity than mere ordering.\", \"The MCQ format significantly reduces the potential for \\\"randomness\\\" by limiting the output to essentially a single token from a small set of choices. This artificial constraint likely underestimates the extent of randomness-based hallucinations that would occur in more open-ended, realistic use cases where models generate unrestricted text.\"], \"rating\": \"8\", \"confidence\": \"4\"}", "{\"title\": \"Well-written, contribution is minor\", \"review\": \"# Summary\\nThis work examines the influence of prompt formulation on LLM hallucinations. They predominantly focus on shuffling to vary prompts. \\nThe paper introduces prompt multiplicity, which measures the output agreement (on a given question) across different prompt formulations. Using this approach, the authors break down errors made by LLMs into prompt-sensitive and prompt-agnostic. \\n\\n# Review: \\nOverall, I'm unsure about this paper. The paper is generally well-written, but the contribution is minor. 
\\n\\nThe observation that prompt design can influence which questions a model answers correctly\\u2014while maintaining the same overall accuracy\\u2014is interesting. The main difference with existing literature (on the influence of prompt design on accuracy) is the level of analysis. This work focuses on the question-level, whereas existing work focuses more on the (aggregate) dataset-level. \\n\\nHowever, I expected the authors to suggest a way to leverage this insight\\u2014 e.g., show how multiplicity could be used to improve performance or as a safety mechanism. Instead, they suggest using multiplicity to break down model performance. Concretely, they suggest breaking down correct answers into \\u201cprompt-agnostic factuality\\u201d and randomness. This measures whether the model is robust to prompt variations. While interesting, this does seem close to existing work (see implementation comments).\\nNext, they recommend breaking down incorrect answers into prompt-agnostic errors and randomness (prompt-sensitive errors). This distinguishes the cases in which the model consistently outputs an incorrect answer, or outputs different (but still incorrect) answers. \\nAs mentioned by the authors, uncertainty captures this axes as it can be used to capture 'randomness' vs non-random answers. \\n\\nAs acknowledged by the authors, this means that the main contribution of this framework is the distinction between prompt-agnostic factuality and prompt-agnostic errors. However, this breakdown is similar to the classic notions of \\u2018correctness\\u2019 and \\u2018robustness\\u2019. \\n\\nMy key takeaway from the paper is that prompts influence which questions a model answers correctly. However, there is not a *predictable* way in which the prompt influences the correctness (e.g., a proposed style or aspect). Therefore, it becomes more of a claim about robustness. While this is an interesting observation, I would have liked to see a more practical application of this insight.\", \"implementation_comments\": [\"The authors measure multiplicity using shuffling. Lu et al. (https://arxiv.org/abs/2104.08786, as cited by the authors) specifically examine how factuality is affected by shuffling, which diminishes the novelty of this contribution.\", \"Table 2: In the uncertainty community, the use of probabilities (or perplexity) has been criticized. The relationship between perplexity and sensitivity is clear, and therefore this result does not add as much. Instead, it would be more appropriate to report predictive entropy.\", \"# Minor details and suggestions\", \"Definition 1/2 -- should it be $y_i \\\\in \\\\mathbb{Z}+^n$?\", \"I'd recommend placing definition 5 before definition 3/4 \\u2014 it makes it easier for the reader.\", \"A more in-depth discussion of the uncertainty literature would be appropriate -- some of the key ideas here (e.g., disagreement across different prompt variations) relate to ideas in the classic uncertainty literature. For example, the same disagreement is leveraged in ensembles (https://arxiv.org/abs/1612.014740).\", \"Please state what value for $\\\\tau$ is used in the main text. It'd also be nice to see how the results change for different values of $\\\\tau$.\"], \"rating\": \"5\", \"confidence\": \"3\"}" ] }
xo2e14bczv
UNLEARNING GEO-CULTURAL STEREOTYPES IN MULTILINGUAL LLMS
[ "Alireza Dehghanpour Farashah", "Aditi Khandelwal", "Negar Rostamzadeh", "Golnoosh Farnadi" ]
As multilingual generative models become more widely used, most safety and fairness evaluation techniques still focus on English-language resources, while overlooking important cross-cultural factors. This limitation raises concerns about fairness and safety, particularly regarding geoculturally situated stereotypes that hinder the models’ global inclusivity. In this work, we present preliminary findings on the impact of stereotype unlearning across languages, specifically in English, French, and Hindi. Using an adapted version of the SeeGULL dataset, we analyze how unlearning stereotypes in one language influences other languages within multilingual large language models. Our study evaluates two model families, Llama-3.1-8B and Aya-Expanse-8B, to assess whether unlearning in one linguistic context transfers across languages, potentially mitigating or exacerbating biases in multilingual settings.
[ "Machine Unlearning", "Multilingual Large Language Models", "Fairness", "Geo-Cultural Stereotypes" ]
Accept
https://openreview.net/pdf?id=xo2e14bczv
https://openreview.net/forum?id=xo2e14bczv
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "lDPqu6qjfL", "kXdiq1EbjJ", "G96QWM6964", "BcqkvM6S4N" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741056796119, 1740563138581, 1740722385628, 1740848408504 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission97/Reviewer_etSH" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission97/Reviewer_1mrc" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission97/Reviewer_haLq" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review: Unlearning Geo-Cultural Stereotypes in Multilingual LLMs\", \"review\": \"Strengths\\n\\nThe paper makes a significant contribution through its cross-lingual approach to stereotype unlearning, addressing a critical gap in the field which has primarily focused on English-centric evaluations. By examining how unlearning in one language (English, French, or Hindi) transfers to others, the authors provide valuable insights into the factors affecting cross-lingual bias transfer, particularly highlighting the roles of linguistic similarity and training data volume.\\n\\nThe evaluation framework is exceptionally well-designed, utilizing a multiple-choice QA format with an \\\"Unknown\\\" option that provides a clear, quantifiable way to measure stereotype reduction. This approach makes it easier to track progress in model alignment and offers a more objective assessment than traditional bias measures. The inclusion of this neutral option is particularly innovative as it allows models to express uncertainty rather than forcing a potentially stereotypical response.\\n\\nThe translation of the SeeGULL dataset into French and Hindi represents a valuable resource for the research community. The authors' pragmatic approach of translating only the questions while maintaining consistent answer options across languages demonstrates a thoughtful balance between feasibility and methodological rigor, creating a multilingual stereotype evaluation benchmark that can be extended to additional languages in future work.\\n\\nPotential Improvements\\n\\nA significant methodological concern arises from using the same SeeGULL dataset for both unlearning and evaluation. This approach creates potential data leakage issues and makes it difficult to determine whether the models have truly learned to generalize unbiased responses or are simply memorizing patterns from the training data. Future work would benefit from clearly separated training and evaluation sets, possibly with different stereotype categories or formats to better assess generalization.\\n\\nThe paper lacks exploration of the stability of unlearning effects across different prompting strategies and over time. Given that prompt formulations can significantly impact model outputs, it would be valuable to understand whether the unlearned behaviors remain consistent when questions are rephrased or presented in different contexts. Additionally, examining whether these effects persist after the model processes many subsequent queries would provide important insights for real-world applications.\\n\\nFinally, the evaluation would be strengthened by additional metrics that capture more nuanced aspects of stereotype reduction. While the three-category classification (stereotypical, unbiased, other) provides a useful high-level view, it may miss subtler shifts in model behavior. 
Incorporating confidence scores, analyzing response latency, or examining the distribution of alternatives selected when avoiding stereotypical answers could reveal more detailed patterns in how models adapt to unlearning interventions across languages.\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"review\", \"review\": \"The paper tackles the critical issue of geo-cultural stereotypes in multilingual LLMs, addressing a gap in fairness and AI safety evaluations that predominantly focus on English-language biases.\\n\\nPros and findings\\n1. the short paper is well structured and provides reasonable experiments to show the effectiveness of the unlearning methods across different multilingual models.\\n2. Results indicating that unlearning in English reduces biases in French but has a weaker effect in Hindi, which potentially shows the correlation among languages.\\n\\nCons\\n1. While the inclusion of French and Hindi is a step forward, it would be better if the study provides exploration on more languages.\\n2. The paper focuses primarily on multiple-choice stereotype recognition which may not fully capture the complexity of biases in real-world model outputs such as open-ended question answering.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"This paper investigates cross-lingual transfer effects of gradient ascent\\u2013based unlearning for mitigating geo-cultural stereotypes in multilingual LLMs. By adapting the SeeGULL dataset into French/Hindi QA formats and testing on Llama-3.1-8B and Aya-Expanse-8B, the study reveals stronger bias reduction transfer in linguistically/culturally similar languages (English\\u2192French) and models with diverse multilingual training strategies (Aya).\", \"review\": \"## Quality & Clarity:\\nThe paper tackles a critical gap in multilingual AI fairness by evaluating how stereotype unlearning in one language affects others. The methodology is technically sound, combining gradient ascent unlearning with KL-divergence regularization to preserve general capabilities. The QA adaptation of SeeGULL (7K stereotypes across 178 countries) into French/Hindi is well-executed, though human verification details for translations are sparse. Results are presented clearly through response distribution charts (Figs 2\\u20133) and GLUE benchmark comparisons (Table 1).\\n\\n## Originality:\\nThis is one of the first works to systematically study cross-lingual bias transfer in LLM unlearning. While prior research focused on monolingual bias mitigation (Gallegos et al., 2024) or privacy-focused unlearning (Yao et al., 2024), the adaptation of SeeGULL for multilingual evaluation and analysis of linguistic/cultural proximity effects break new ground.\\n\\n## Significance:\", \"the_findings_have_practical_implications_for_global_ai_deployment\": [\"Models like Aya-Expanse, trained with synthetic multilingual data, show 63% unbiased responses post-unlearning (vs. 32% baseline) with cross-lingual transfer.\", \"Unlearning effectiveness correlates with linguistic similarity (English\\u2192French > English\\u2192Hindi) and training data diversity.\", \"### Pros:\", \"Novel cross-lingual evaluation framework for bias unlearning\", \"Strong empirical validation on two model families\", \"Theoretically grounded loss function combining $L_{fgt}$, $L_{retain}$, and $L_{nor}$ (Eq. 1)\", \"Practical insights about model architecture impacts (Aya vs. 
Llama)\", \"### Cons:\", \"Limited to three languages; underrepresented languages (e.g., Swahili) are excluded\", \"No comparison to alternative unlearning methods (e.g., counterfactual interventions)\", \"Human evaluation missing for translated QA pairs\"], \"rating\": \"7\", \"confidence\": \"4\"}" ] }
wOA4bhb2ld
Measuring In-Context Computation Complexity via Hidden State Prediction
[ "Vincent Herrmann", "Róbert Csordás", "Jürgen Schmidhuber" ]
Detecting when a neural sequence model does "interesting" computation is an open problem. The next token prediction loss is a poor indicator: Low loss can stem from trivially predictable sequences that are uninteresting, while high loss may reflect unpredictable but also irrelevant information that can be ignored by the model. We propose a better metric: measuring the model's ability to predict its own future hidden states. We show empirically that this metric–in contrast to the next token prediction loss–correlates with the intuitive interestingness of the task. To measure predictability, we introduce the architecture-agnostic "prediction of hidden states" (PHi) layer that serves as an information bottleneck on the main pathway of the network (e.g., the residual stream in Transformers). We propose a novel learned predictive prior that enables us to measure the novel information gained in each computation step, which serves as our metric. We show empirically that our metric predicts the description length of formal languages learned in-context, the complexity of mathematical reasoning problems, and the correctness of self-generated reasoning chains.
[ "in-context learning", "interpretability" ]
Accept
https://openreview.net/pdf?id=wOA4bhb2ld
https://openreview.net/forum?id=wOA4bhb2ld
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "acooNMNrjP", "Xq1JBzoSgp", "QZcNZU8DTG", "46j0CVVhOD" ], "note_type": [ "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1740399753775, 1740883004664, 1740177925308, 1741084712951 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission121/Reviewer_kENj" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission121/Reviewer_sfyP" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission121/Reviewer_Qwkj" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"review\": \"### Summary\\nThis paper proposes a novel metric, the Prediction of Hidden States (PHi) loss, to measure the complexity of in-context computation in large language models (LLMs). Traditional next-token prediction loss fails to distinguish between meaningful computation and trivial memorization, prompting the authors to introduce PHi loss, which measures how well a model can predict its own future hidden states.\\nTo implement this, the authors introduce a PHi layer, which acts as an information bottleneck in the model's residual stream. The key idea is that the KL divergence between the posterior and the prior in the PHi layer quantifies the amount of new information learned at each step. The authors validate their approach with various experiments, showing that PHi loss correlates with task complexity in in-context language learning, mathematical reasoning, and self-generated reasoning chains.\\n\\n### Strongness\\n- Unlike traditional next-token loss, PHi loss provides insight into how much non-trivial computation is occurring within a model\\u2019s hidden states.It is architecture-agnostic and can be integrated into different neural architectures, including Transformers and RNNs.\\n- The authors demonstrate the effectiveness of PHi loss across in-context language learning, mathematical problem-solving, and reasoning tasks.Their findings indicate that higher PHi loss correlates with higher task complexity, making it a potentially useful metric for understanding model reasoning.\\n- The study extends beyond training from scratch by incorporating PHi layers into pretrained models like LLaMA-3B, providing a feasible post-hoc method for analyzing existing models.\\n- PHi loss could be used to improve interpretability, guide model training, or even serve as an intrinsic reward in reinforcement learning setups.\\n\\n### Questions\\n- A key question is whether optimizing for lower PHi loss actually leads to better model performance.\", \"rating\": \"9\", \"confidence\": \"4\"}", "{\"title\": \"Proposing a very interesting and effective way to quantify \\\"interesting\\\" computation in the in-context inference\", \"review\": \"The paper proposes a way to measure how much \\\"interesting\\\" computation is done during in-context sequence prediction. Instead of quantifying this using next-token prediction loss, which would have high loss even when the sequence is random (indicating no in-context learning is required), they propose predicting the next hidden state.\\n\\nThey insert a layer within the transformer to use the latent state as an information bottleneck and introduce a module to predict the next latent state based on the history of latents. 
Then, they compute the KL divergence between the prior (based on latent state history) and the posterior (conditioned on the next token's hidden state) and call it PHi loss.\\n\\nExperimental results compare \\\"boring\\\" tasks, such as tasks focused on memorization and random sequences, with \\\"interesting\\\" tasks, such as literature and coding. Their quantification method shows that PHi loss is indeed higher for interesting tasks and lower for boring tasks. They also conduct a control experiment using probabilistic finite automaton sequences, demonstrating that more complex structures correspond to higher PHi loss.\\n\\nThis is a very interesting and effective way to quantify in-context learning computation, and they further show that it can be applied to a 3B parameter pre-trained model as well.\", \"rating\": \"8\", \"confidence\": \"2\"}", "{\"title\": \"PHi Loss\", \"review\": \"The authors propose PHi Loss, which measures the model's ability to predict its own next hidden state, as a metric to identify when a sequence model is doing interesting in-context computations. The authors found that PHi Loss outperform next token prediction loss for a wide range of datasets and models.\\n\\n- Questions & Comments\\n1) Perhaps it would be better to define Phi Loss as the inverse of KL divergence, since we usually associate lower loss to be better. In Figure 3, random sequence currently has the lowest loss.\\n2) In Figure 6, it would be interesting to plot the difference between consecutive PHi loss, which would essentially loss how much interesting computation is performed in each layer. It seems like this derivative plot will look like a parabola with the peak at the middle layer. This is consistent with literature showing that most interesting computations (e.g. understanding high-level semantic concepts) are performed in the middle layer.\", \"rating\": \"8\", \"confidence\": \"4\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}" ] }
wMkUxoJ25I
Prune 'n Predict: Optimizing LLM Decision-making with Conformal Prediction
[ "Harit Vishwakarma", "Thomas Cook", "Alan Mishler", "Niccolo Dalmasso", "Natraj Raman", "Sumitra Ganesh" ]
Large language models (LLMs) are empowering decision-making in several applications, including tool or API usage and answering multiple-choice questions (MCQs). However, incorrect outputs pose significant risks in high-stakes domains like healthcare and finance. To quantify LLM uncertainty and thereby mitigate these risks, recent works employ conformal prediction (CP), a model- and distribution-agnostic framework that uses LLM outputs to generate a \emph{prediction set} containing the true answer with high probability. Leveraging CP, we propose \emph{conformal revision of questions} (CROQ), which revises the question by narrowing down the available choices to those in the prediction set and asking the LLM the revised question. We expect LLMs to be more accurate on revised questions with fewer choices. Furthermore, we expect CROQ to be effective when the prediction sets from CP are small. Commonly used logit scores often lead to large sets, diminishing CROQ's effectiveness. To overcome this, we propose CP-OPT, an optimization framework to learn scores that minimize set sizes while maintaining coverage. Our extensive experiments on MMLU, ToolAlpaca, and TruthfulQA datasets with multiple LLMs show that CROQ improves accuracy over the standard inference, with more pronounced gains when paired with CP-OPT.
[ "Large Language Models", "Conformal Prediction", "Uncertainty Quantification", "Prompting", "MCQ", "Tool Learning", "Agentic AI", "Test-time Scaling" ]
Accept
https://openreview.net/pdf?id=wMkUxoJ25I
https://openreview.net/forum?id=wMkUxoJ25I
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "dSk73UBW1G", "Zu4WzLQbLV", "Z9ADziWvOO", "4QRoFaY5K7" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741082831464, 1739738124554, 1740765476110, 1740907284272 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission23/Reviewer_bcgC" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission23/Reviewer_Tex9" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission23/Reviewer_iLGa" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review for Prune 'n Predict\", \"review\": \"Summary\\nThe paper proposes a method, Conformal Revision of Questions (CROQ), to improve LLM accuracy in MCQ settings by employing conformal prediction (CP) to eliminate distractor answers while maintaining high coverage of the correct answer. The authors propose a two-stage process wherein an initial prediction set is generated using either the LLM\\u2019s logits or a learned conformity score optimized by a method they call CP\\u2011OPT, and then the original prompt is revised to include only the answer choices in this prediction set. This revised prompt, containing fewer distractors, is subsequently used to obtain a final answer from the model.\\n\\nStrengths \\nThe paper benchmarks on a variety of MCQ benchmarks and models. \\nAblations are thorough. \\nThe paper proposes a novel CP-OPT method, proves its validity, and demonstrates empirically that the coverage probability of this method is well-calibrated, results in smaller average set sizes, and improves accuracy. The method of using conformal prediction for MCQs is not new, but the learn CP-OPT scores method is novel and the application of conformal prediction to revising MCQs is also novel. \\n\\nWeaknesses/Questions\", \"significance_of_performance_gains_unclear\": \"The t-tests give some notion of statistical significance of the improvements of the method, but it would be helpful to see CROQ compared to other well-known methods of improving MCQ performance, e.g. \\\"Let's think step by step.\\\" If using vanilla COT prompting, for instance, provides much more significant gains in MCQ accuracy than using CROQ, that would reflect poorly on the significance of this method. Conversely, if CROQ outperforms, then comparing to COT would be a strong showing for the method. There are plausibly many ways to obtain the few percentage points increase in performance observed in some settings to which this method is applied.\", \"generalization\": \"Discussions regarding the scalability and potential generalization of this method to other decision-making tasks are limited, leaving open questions about how well the approach might perform in more complex or varied contexts.\", \"choosing_the_miscoverage_parameter\": \"The authors demonstrate through ablation studies that the miscoverage parameter significantly affects the CROQ accuracy. They show that the optimal miscoverage parameter w.r.t. maximizing CROQ accuracy is task-dependent. It would be helpful to see some discussion on dynamically choosing the optimal miscoverage parameter for a given task or domain.\", \"minor_comments\": \"It's a bit odd to have the Related Works section positioned after the Experiments section. 
Likely fits better before Preliminaries.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Review for Submission Number 23\", \"review\": \"This paper applies the methodology of Conformal Prediction to reduce the set of choices for an LLM in an MCQ setting - the authors term this CROQ - thereby hoping to improve LLM performance, as the authors show that more \\u2018distractor options\\u2019 results in worse LLM performance, all else being held equal. Additionally, the paper introduces a method for optimization of the conformal score function - they term this CP-OPT - where the original discrete constrained optimization objective is relaxed via use of sigmoid approximations and into an unconstrained optimization with a penalty term, and the target function - g - is represented, in their case, with a 3 layer neural network.\\n\\nThe authors demonstrate that a) CP-OPT results in reduced set size relative to use of LLM logits whilst (largely - see below) maintaining coverage and b) CROQ improves LLM accuracy, and particularly so when used with CP-OPT rather than logits. They demonstrate this on 3 datasets and across 3 different models from different model families.\\n\\nOverall, the paper is well written and the methodology presented is - to the best of my knowledge - novel. The breadth of experiments conducted is also satisfactory. Although the resultant improvements of the method are often not particularly large, I still recommend that this paper is accepted - it is quite high quality by the standards of a workshop submission.\\n\\n## Strengths:\\n1. Well written and technically precise.\\n2. Methodology presented is novel, to the best of my knowledge.\\n3. The problem that the paper seeks to tackle is generally of importance.\\n\\n## Weaknesses:\\n1. The improvement of both CROQ, CP-OPT and the combination of both vs the standard baselines is not always particularly convincing. Although I agree that the direction of improvement is certainly towards the methods introduced, and is sometimes statistically significantly so, it is also sometimes not; and even in the statistically significant cases, the magnitude of improvement is often not large (e.g. though a 0.11% increase in accuracy is reported as statistically significant in Table 3, it is debatable whether this constitutes a meaningful improvement on the MCQ task in practical terms).\\n2. In some cases - such as Llama-3 coverage on TruthfulQA with 4 options in Table 1, at 92.41% - the coverage is worryingly far below 95%; it may even be statistically significantly so (the authors do not report this, as far as I understand - they only report statistical significance of the difference of coverage between logits and CP-OPT).\\n\\n## Questions:\\n1. Although equation (P1) requires that $\\\\mathcal{P}(g, \\\\tau) \\\\ge 1 - \\\\alpha$, the formulation of the objective in equation (P2) would achieve a minimum when $\\\\widetilde{\\\\mathcal{P}}(g, \\\\tau) = 1 - \\\\alpha$. Is there a reason for this? Could this lack of enforcing of the 1-sidedness of the objective contribute to the relatively frequent undercoverage of CP-OPT in Table 1?\\n2. Is there a different G trained for each dataset/task?\\n3. Could CP-OPT also be applied in non-LLM settings? If it is inspired by, or similar to, existing methods in other settings, the exact delineation and delta of the contribution should be mentioned.\\n4. 
I think the paper would benefit from some discussion or experiments pertaining to domain shift as that is more likely to reflect real-life usage (many settings will not have clean labelled data for calibration). For example, can CP-OPT calibrated on MMLU transfer well to TruthfulQA for CROQ, and vice versa? How does it do compared to just using logits?\\n5. Additionally, I would encourage the authors to investigate and discuss why CROQ often results in only marginal accuracy improvements (or explore why this is so in some cases and not others), especially relative to the striking differences between different numbers of distractors in Figure 1.\", \"rating\": \"8\", \"confidence\": \"4\"}", "{\"title\": \"Official Review of Submission23 by Reviewer iLGa\", \"review\": \"Summary:\\n\\nThis paper formalizes a way to use conformal prediction to improve language models\\u2019 accuracy when answering multiple choice questions (MCQs) through conformal revision of questions (CROQ). In CROQ, a subset of the options that includes the true answer with high probability, the prediction set, is generated for each problem. Then, each question\\u2019s set of options is replaced by the prediction set, resulting in a smaller set of options for some problems. This paper also describes and evaluates an approach for optimizing CROQ with CP-OPT, which reduces the size of prediction sets through optimization. Experiments show that CROQ and CP-OPT can improve language model\\u2019s performance on MCQ datasets.\", \"pros\": [\"The paper has shown that CROQ is applicable to different settings by evaluating the procedure on multiple language models and datasets.\", \"The objective and implementation of CP-OPT is presented clearly. The first two sections of the paper are easy to follow and give an intuitive understanding of the optimization problem that CP-OPT represents.\", \"In CROQ, it\\u2019s easy for a user to look at the question revisions made during CROQ and make conclusions about the options that are present or excluded in the prediction sets.\", \"The paper is well-organized and easy to follow.\"], \"cons\": [\"CP-OPT makes use of additional training for the scoring function g, but the experiments currently compare CROQ + CP-OPT with other procedures that do not require additional training. How does CROQ + CP-OPT compare to other procedures that also make use of additional training, like supervised fine tuning?\", \"The paper only analyzes CROQ and CP-OPT in terms of prediction set sizes and accuracy on problems. Are there any patterns in the prediction sets from CROQ + logits and CROQ + CP-OPT?\", \"With how CROQ and CP-OPT are currently presented, it\\u2019s not clear if they can be used on non MCQ settings.\", \"CROQ does not seem particularly original since prior works have used conformal prediction in MCQ contexts already.\"], \"rating\": \"6\", \"confidence\": \"3\"}" ] }
wM521FqPvI
Why Do Multiagent Systems Fail?
[ "Melissa Z Pan", "Mert Cemri", "Lakshya A Agrawal", "Shuyi Yang", "Bhavya Chopra", "Rishabh Tiwari", "Kurt Keutzer", "Aditya Parameswaran", "Kannan Ramchandran", "Dan Klein", "Joseph E. Gonzalez", "Matei Zaharia", "Ion Stoica" ]
Despite growing enthusiasm for Multi-Agent Systems (MAS), where multiple LLM agents collaborate to accomplish tasks, their performance gains across popular benchmarks remain minimal compared to single-agent frameworks. This gap highlights the need to analyze the challenges hindering MAS effectiveness. In this paper we conduct the first comprehensive study of challenges of MAS across 5 popular Multi-Agent Systems over 150+ tasks. We conduct an investigation with four expert human annotators studying the MAS execution traces, identifying 18 fine-grained failure modes, and propose a comprehensive failure taxonomy applicable across systems. We group these fine-grained failure modes into four key categories: (i) specification ambiguities and misalignment, (ii) organizational breakdowns, (iii) inter-agent conflict and coordination gaps, and (iv) weak verification and quality control. To understand whether these failure modes could have easily been avoided, we propose two interventions: improved agent role specification and orchestration strategies. We find that identified failures require more involved solutions and we outline a roadmap for future research in this space. To contribute towards better development of MAS, we will open source our dataset, including the agent conversation traces and human annotations.
[ "multi-agent systems", "large language models", "llm", "compound ai systems", "agents", "ai", "tool calling" ]
Accept
https://openreview.net/pdf?id=wM521FqPvI
https://openreview.net/forum?id=wM521FqPvI
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "fNePUVTEWj", "TLGSyFM2Si", "RNZbOP9ywz" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740303757272, 1740038002003, 1741100104686 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission70/Reviewer_WLQN" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission70/Reviewer_QQsj" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Lack of analysis to separate MAS from single-agent issues\", \"review\": \"**Strengths:**\\n1. First comprehensive analysis of MAS failure modes, filling a key research gap.\\n2. MASFT offers a structured, reusable framework for classifying MAS failures.\\n\\n**Weaknesses:**\\n1. No ablation study to disentangle MAS-specific failures from single-agent LLM failures (e.g., comparing MAS performance to single-agent baselines with equivalent compute).\\n\\n2. The conclusion that tactical approaches possess severe limitations lacks grounding, as the paper does not rigorously evaluate why interventions failed to address specific failure modes (e.g., via failure mode frequency post-intervention).\", \"rating\": \"6\", \"confidence\": \"2\"}", "{\"title\": \"analyze failure taxonomy for MAS, but lack of details\", \"review\": \"The article proposes a Multi-Agent System Failure Taxonomy (MASFT) to classify 18 failure modes in LLM-based multi-agent systems. It validates the taxonomy through experiments on five systems and suggests interventions to improve performance. The study highlights the need for better role design and coordination strategies to address these failures.\", \"strengths\": \"First systematic failure taxonomy for MAS, addressing a critical research gap.\\n\\nEmpirical validation across diverse systems (e.g., ChatDev, AG\\u00b2).\\n\\nProvides actionable insights for error debugging and coordination improvement.\", \"weakness_involves\": \"\", \"insufficient_detail_on_coding_process\": \"The paper mentions \\u201copen coding,\\u201d \\u201caxial coding,\\u201d and \\u201cconstant comparative analysis,\\u201d but provides minimal detail on how this was done. What were the specific coding schemes? How was inter-annotator agreement measured and ensured?\\n\\nRelying solely on human annotators for identifying \\u201cfailure modes\\u201d introduces subjectivity. \\n\\nThe intervention studies are described as \\u201cbest-effort\\u201d and \\u201ctactical.\\u201d The improvements are modest (+14% for ChatDev), and the paper itself admits they \\u201cfail to fully address MAS failures\\u201d and are \\u201cinsufficiently low for real-world deployment.\\u201d This weakens the claim that the taxonomy is actionable or leads to significant improvements.\", \"vague_definitions_of_failure_modes\": \"While 18 fine-grained failure modes are listed, their definitions in Figure 2 are very brief and potentially overlapping. More detailed descriptions and examples within the main paper (not just appendices) would improve clarity.\\n\\nThe paper strongly emphasizes \\u201corganizational understanding\\u201d and HRO principles. While interesting, it could benefit from a more balanced discussion of other potential contributing factors to MAS failures, such as:\", \"llm_prompting_sensitivity\": \"MAS performance is highly dependent on prompt design. 
The paper could explore how prompt variations impact failure modes.\", \"task_complexity\": \"Are the failure modes more prevalent in certain types of tasks?\", \"llm_model_choice\": \"Does the choice of LLM (GPT-4o vs. Claude-3) significantly affect the types of failures observed?\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}" ] }
wBygggbUV8
Adaptive Test-Time Intervention for Concept Bottleneck Models
[ "Matthew Shen", "Aliyah R. Hsu", "Abhineet Agarwal", "Bin Yu" ]
Concept bottleneck models (CBM) aim to improve model interpretability by predicting human level "concepts" in a bottleneck within a deep learning model architecture. However, how the predicted concepts are used in predicting the target still either remains black-box or is simplified to maintain interpretability at the cost of prediction performance. We propose to use Fast Interpretable Greedy Sum-Trees (FIGS) to obtain Binary Distillation (BD). This new method, called FIGS-BD, distills a binary-augmented concept-to-target portion of the CBM into an interpretable tree-based model, while maintaining the competitive prediction performance of the CBM teacher. FIGS-BD can be used in downstream tasks to explain and decompose CBM predictions into interpretable binary-concept-interaction attributions and guide adaptive test-time intervention. Across $4$ datasets, we demonstrate that our adaptive test-time intervention identifies key concepts that significantly improve performance for realistic human-in-the-loop settings that only allow for limited concept interventions.
[ "interpretable machine learning", "distillation", "test-time intervention" ]
Accept
https://openreview.net/pdf?id=wBygggbUV8
https://openreview.net/forum?id=wBygggbUV8
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "iPqdBJXGZH", "Oo2TEe5I1C", "NFgYBuiJ4J" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740564293200, 1740433454650, 1741103960202 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission117/Reviewer_TJis" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission117/Reviewer_zH9b" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Review of Enhancing CBMs Through Binary Distillation with Applications to Test-Time Intervention\", \"review\": [\"## Strengths\", \"The paper addresses a practical limitation of CBMs: the challenge of maintaining interpretability in the concept-to-target portion while preserving model performance.\", \"The empirical results demonstrate that FIGS-BD achieves over 92.5% of the teacher model's performance across all datasets, suggesting minimal performance sacrifice for interpretability gains.\", \"The evaluation across both CV and NLP domains demonstrates the versatility of the approach.\", \"## Weakness\", \"The paper does not offer a thorough theoretical comparison with other distillation or interpretable modeling methods for CBMs. Without this, it is hard to assess whether FIGS-BD offers significant theoretical advantages or if it is merely one of many viable approaches. This absence of contextualizing the contribution within the broader theoretical landscape weakens the paper's impact.\", \"While the paper references Fourier series representations of Boolean functions to justify binary representations, the analysis does not extend to formal guarantees or bounds on the approximation quality or interpretability trade-offs. It remains unclear how robust or generalizable this binary distillation is across different concept spaces.\", \"The method relies on distilling complex interactions into a set of shallow trees. However, there is no rigorous discussion on how the approach scales when the number of concepts increases. In high-dimensional settings, the combinatorial explosion of potential binary interactions might pose challenges that the current theoretical framework does not address.\", \"## Recommendations\", \"Strengthen the theoretical analysis by providing formal guarantees or bounds on the approximation quality of FIGS-BD.\", \"Include a comprehensive theoretical comparison with other interpretable distillation methods for CBMs to better position the contribution.\", \"While the paper presents a practically useful approach with promising empirical results, its theoretical foundations are insufficiently developed. The lack of comparative analysis with alternative methods and formal guarantees significantly weakens its theoretical contribution. The above suggest that the paper requires substantial theoretical strengthening before it can make a significant contribution to the field of interpretable machine learning.\"], \"rating\": \"5\", \"confidence\": \"3\"}", "{\"title\": \"Review of \\\"Enhancing CBMs Through Binary Distillation with Applications to Test-Time Intervention\\\"\", \"review\": \"### Strength\\nThis paper presents FIGS-BD, a novel approach to improving Concept Bottleneck Models (CBMs) by distilling their concept-to-target (CTT) relationships into an interpretable Fast Greedy Sum-Trees (FIGS) model. FIGS-BD improves interpretability without significantly sacrificing prediction accuracy, making CBMs more transparent for human-in-the-loop settings such as medical diagnosis. 
The paper also introduces adaptive test-time intervention (ATTI), which ranks critical concepts for human validation, improving model reliability. The method is evaluated on four datasets (CUB, TravelingBirds, AGNews, CEBaB) and shows that FIGS-BD achieves over 92.5% of the teacher model\\u2019s accuracy, while enhancing human interpretability and intervention effectiveness.\\n\\n### Weakness\\nThe paper lacks a detailed comparison against other interpretability approaches, such as post-hoc explainability methods or alternative model distillation techniques. Additionally, scalability concerns are not addressed\\u2014how well does FIGS-BD scale to larger models or real-world deployments? While the ATTI approach is promising, more rigorous human studies (e.g., involving real practitioners rather than simulated settings) would strengthen its impact. Finally, quantitative evaluation of interpretability is missing\\u2014how do human users perceive FIGS-BD explanations compared to traditional CBMs?\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}" ] }
viTioLnq4C
Seeds, Contexts, and Tongues: Decoding the Drivers of Hallucination in Language Models
[]
This study investigates hallucinations in Large Language Models (LLMs) during free-form text generation, particularly in Nigerian and Western contexts. We study how hyperparameters, cultural background, and prompt language (particularly Nigerian Pidgin) affect hallucination rates. Using semantic entropy as an indicator of hallucination, we examine response variability in Llama 3.1 outputs and cluster them using the entailment model microsoft/deberta-base-mnli to identify semantic similarity. We then use these clusters to calculate semantic entropy (the variation in meanings of the LLM's responses) using a variant of Shannon entropy to quantify hallucination likelihood. Our findings shed light on ways to improve LLM reliability and consistency across linguistic and cultural situations.
[ "Language Models (LLMs)", "Hallucination Detection", "Semantic Entropy", "Natural Language Processing (NLP)", "Pidgin Language", "Cross-Lingual Analysis" ]
Reject
https://openreview.net/pdf?id=viTioLnq4C
https://openreview.net/forum?id=viTioLnq4C
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "nR1zSBq1HU", "RhHv2zqFge", "KYj0oLbyPF" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1739839423204, 1740696704671, 1741080010562 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission143/Reviewer_gLmi" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission143/Reviewer_BcXu" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Interesting ideas, but requires further work\", \"review\": \"## Summary ##\\n\\nThis paper investigates the influence of hyperparameters (seed, and temperature) and prompt language on the estimated semantic entropy. Semantic entropy is a method introduced by Kuhn et al. (2023, https://arxiv.org/abs/2302.09664) that detects hallucinations in LLMs by computing the entropy over semantically-equivalent clusters. \\n\\nOverall, I think that the research question studied is interesting, but the paper requires further work before it can be accepted. In particular, I think the experiment needs to be further developed (e.g., should include a larger sample size than 20 prompts) and the write-up needs to be improved (e.g., the wrong equation for semantic entropy is provided). \\n\\n## Strengths ##\\n\\nI think the topic chosen is interesting. The authors analyse the influence of hyperparameters (seed value, temperature) and prompt language on the estimated semantic entropy. Temperature was studied in the original semantic entropy paper (https://arxiv.org/abs/2302.09664). However, I think the investigation into seed and prompt language are interesting. \\nFrom a robustness perspective, it is important for semantic entropy to be robust to seed selection. Similarly, I think that research into the performance of hallucination detection methods on languages such as Nigerian Pigdin is important for the community. LLM research is often English-centric, and it is important that these methods perform well on non-English languages.\\n\\n## Weaknesses ## \\n\\n[1] The wrong equation for semantic entropy is provided in the paper. The authors provide the formula for entropy, which is meaningfully different. Semantic entropy estimates: \\n$$|C|^{-1} \\\\sum \\\\log p(C_i|x)$$\\n$|C|^{-1}$ doesn\\u2019t need to be the same as $p(C_i|x)$ \\u2014 and mostly likely is not. As the code is not provided, I'm not sure which equation was implemented. \\n\\n[2] Experiment: \\n- the experiment sample size is small (20 prompts), making it difficult to draw conclusions -- I would expect at least 100 prompts. \\n- the details provided are not enough to re-produce the experiments, e.g., 'the nigerian-based questions' dataset is private, which makes it difficult to understand the nature of the dataset. Examples of prompts would be useful.\\n\\n[3] Presentation and discussion of the results\\n- the presentation of the results should be improved -- the main results focus on whether there's a correlation between semantic entropy and the studied. As such, a scatter plot (perhaps with a line showing the estimated correlation) or bar plots seems more appropriate. \\n- it would be great if the authors discussed the results more. One aspect I find difficult to wrap my head around is that the seed magnitude influences the semantic entropy estimation. I think the authors should elaborate more on this result and perhaps run further experiments to explain the phenomenon. \\n\\n[4] Writing -- the authors should be more careful with the language/claims, and the paper clarity could be improved. 
\\nOne example where clarity could be improved is the introduction. Currently, it heavily focuses on hallucinations, and the method (and shortcomings) of duan et al. This part is repeated in the related work. However, duan et al., is not directly relevant to the work. Instead, I think it would be more appropriate to discuss multilingual LLM behaviour and robustness. \\nE.g., a possible structure for the introduction could be: \\n(1) introduce hallucinations, \\n(2) introduce hallucination detection methods, \\n(3) mention that methods predominantly focus on English text evaluation + mention inconsistencies in multi-lingual behaviour (e.g., the propensity of LLMs to answer in English, or assign higher probabilities to English tokens)\\n(4) introduce problems with robustness \\n(5) discuss paper contributions \\n(3) and (4) are currently missing from the paper, which would provide for a more natural transition into (5).\", \"a_couple_of_examples_where_the_text_could_be_improved_are\": [\"\\u201cUnlike yes/no questions that test memory\\u201d -> I'd be careful, as this is an oversimplification of yes/no classification questions\", \"\\u201cHallucination is touted to be introduced to LLMs through flaws in data, training and inference. Issues like misinformation and biases, knowledge shortcut and knowledge recall failures, architecture flaws and suboptimal training objectives, capability misalignment and belief misalignment are the go-to factors.\\u201d -> needs citations\", \"\\u201cthe diversity of LLM outputs, commonly referred to as hallucinations, may indicate the model\\u2019s creativity.\\u201d -> I briefly looked into the reference, and couldn't find this claim in Rawte et al. Rawte et al claim that \\u2018hallucinations\\u2019 are not harmful, and can be used in creative endeavours. However, I couldn't find that 'diversity of LLM outputs are commonly referred to as hallucinations'\"], \"minor_details\": [\"Use `` for the left quotation marks\", \"Contractions should be removed (don\\u2019t, isn\\u2019t, etc.)\", \"Citations are inconsistent (e.g., line 059) \\u2014 should use \\\\citet and \\\\citep\", \"Line 069 big language models -> large language models\", \"Lines 080-083: missing citations\", \"Line 142: LLM -> Llama 3.1\", \"Figure 1: The legibility could be improved -- e.g., increase font size. It's missing part of what happens when a sentence is not part of a cluster: I.e., the step:\", \"Does sentence have the same meaning as any of the existing clusters?\", \"If no -> place sentence in new cluster\"], \"rating\": \"3\", \"confidence\": \"5\"}", "{\"title\": \"Interesting Problem but Incomplete Work\", \"review\": \"The authors aim to study the impact of random seeds, various cultural contexts, as well as the language of the prompt itself, on the eventual hallucinations.\\n\\nThe problem statements tackled and the setup itself are interesting. However, the work is clearly incomplete, and makes some absurd claims instead of exploring the problem deeper.\\n\\n(a) The authors suggest that the seeds have some substantial negative correlation with semantic entropy, and thus claim that higher seeds would create lower entropy. Random seeds DO NOT have comparative value (if they do in the setup used by the authors, please clarify, because they are not supposed to). To extend the authors' claim, I ask, what would be the semantic entropy if the random seed was, say, 1e10. Similarly, what would be the semantic entropy if the random seed was, say, -1e10? 
Are the authors trying to say that we can reduce hallucinations by choosing a larger seed? I believe they are getting these absurd results because the experiment setup is quite small (only 20 questions from each language), but the lack of understanding of the claims being made is not good.\\n\\n(b) Most experiment discussions are limited to just measuring the correlation, with no overarching conclusions/takeaways. What do we learn from these experiments? The only takeaway that is actually present is the one that claims correlation with the random seed, which is quite absurd to me.\\n\\n(c) The paper writing itself is incomplete. There is an empty section 4.3.\\n\\nThe authors have a very impactful research question to explore, however, this is a very preliminary draft of something interesting to come. This is, unfortunately, not in a shape to be accepted.\", \"rating\": \"4\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"comment\": \"The paper's experimental setup is limited, with an extremely small sample size (only 20 prompts), making its conclusions unreliable, especially regarding the claim of a negative correlation between random seeds and semantic entropy. Additionally, the writing is incomplete and unclear, with missing sections, incorrect equations, and insufficient discussion of results, leaving fundamental claims unsubstantiated and raising concerns about the validity of the findings.\", \"title\": \"Paper Decision\"}" ] }
vcyq2Fw3mv
Disentangling Linguistic Features with Dimension-Wise Analysis of Vector Embeddings
[ "Saniya Karwa", "Navpreet Singh" ]
Understanding the inner workings of neural embeddings, particularly in models such as BERT, remains a challenge because of their high-dimensional and opaque nature. This paper proposes a framework for uncovering the specific dimensions of vector embeddings that encode distinct linguistic properties (LPs). We introduce the Linguistically Distinct Sentence Pairs (LDSP-10) dataset, which isolates ten key linguistic features such as synonymy, negation, tense, and quantity. Using this dataset, we analyze BERT embeddings with various statistical methods, including the Wilcoxon signed-rank test, mutual information, and recursive feature elimination, to identify the most influential dimensions for each LP. We introduce a new metric, the Embedding Dimension Importance (EDI) score, which quantifies the relevance of each embedding dimension to a LP. Our findings show that certain properties, such as negation and polarity, are robustly encoded in specific dimensions, while others, like synonymy, exhibit more complex patterns. This study provides insights into the interpretability of embeddings, which can guide the development of more transparent and optimized language models, with implications for model bias mitigation and the responsible deployment of AI systems.
[ "interpretability", "embeddings", "BERT", "GPT", "LLM", "natural language processing", "linguistics", "disentanglement" ]
Accept
https://openreview.net/pdf?id=vcyq2Fw3mv
https://openreview.net/forum?id=vcyq2Fw3mv
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "w2P6JTYK94", "oId3hwvpCF", "XoV5fkQbiB", "N3DkpFwEu6" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741056213555, 1740876885102, 1741031703852, 1740399512547 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission78/Reviewer_qciE" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission78/Reviewer_a6aQ" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission78/Reviewer_kdQ1" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"The paper effectively explores how specific linguistic features are captured in a subset of embedding dimensions, but it would be even more compelling with token-level analysis and demonstrations of \\u201csteering\\u201d the model or other theories of impact\", \"review\": \"I appreciate this paper\\u2019s attempt to dissect the inner workings of sentence embeddings by zeroing in on specific linguistic properties. The authors clearly put a lot of care into building the LDSP-10 dataset and applying a neat combination of Wilcoxon, mutual information, and recursive feature elimination. It\\u2019s interesting to see how a handful of dimensions can capture changes in negation or polarity so strongly. Still, the paper could benefit from token-level analysis to complement the sentence-level approach. That deeper look might reveal local shifts in subwords or single tokens, especially when each linguistic property is introduced.\\n\\nThe writing is generally straightforward, and the approach is explained well enough to follow how LDSP-10 is constructed and how the different statistical methods fit together. \\n\\nFrom other interpretability work (e.g., those on superposition), it\\u2019s not too surprising that certain properties end up in a low-dimensional subspace. That said, focusing on minimal sentence-pair perturbations is a nice method to highlight which embeddings are doing the \\u201cheavy lifting\\u201d for each property. The authors do push forward the idea of explicitly measuring how different these dimensions are with multiple tests, which is a solid addition to existing techniques. Overall, the topic is important because knowing where specific properties live in the embedding space can help refine or debug large language models, and perhaps even mitigate biases. It would be even more significant if the authors showed how these identified dimensions might be used in a causal way\\u2014actively steering the model\\u2019s output by changing embedding values. That would demonstrate a direct application beyond analysis.\\n\\nPros and Cons\\n\\nPros\\nThere is careful dataset construction in LDSP-10, with minimal changes ensuring a specific linguistic property is isolated. The combination of Wilcoxon, mutual information, and feature elimination is also compelling, especially for confirming certain dimensions matter. The approach highlights how robust negation or polarity signals can be in a relatively small set of dimensions, which is quite informative.\\n\\nCons\\nThe paper only provides correlational evidence that a set of dimensions reflect a given property. We don\\u2019t see direct intervention experiments, like trying to move the embedding along those dimensions to see if the model changes its behavior accordingly. It also misses the token-level perspective, where local phenomena might be more nuanced. 
That deeper exploration of direct manipulations or a single-token lens could have made the findings more persuasive.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Interpreting Embedding Dimensions\", \"review\": \"The authors curate a dataset and framework for finding features which encode 10 key linguistic properties (LP) including Control, Synonym, Quantity, Tense, Intensifier, Voice, Definiteness, Factuality, Polarity, and Negation. The authors introduce an embedding dimension importance (EDI) metric to measure how entangled an embedding dimension is with an LP. I think the approach is interesting and the conducted experiments verify that these LPs truly exist and are encoded by embedding features. Some suggestions:\\n\\n1. It's hard to tell which LPs were originally contributed by the authors and which already existed. Could the authors make this clearer early on? Additionally, include earlier discussion on how \\\"complete\\\" this list is. \\n\\n2. It is clear that some LPs are very clearly entangled while others have more \\\"complex\\\" relationships. How sure can we be that these 10 LPs are viewed as linearly independent by the model? This is veering closer to mechanistic interpretability/residual streams, but some discussion on this would be interesting.\", \"rating\": \"7\", \"confidence\": \"2\"}", "{\"title\": \"Review\", \"review\": [\"### Summary\", \"This paper investigates how different linguistic properties (LPs) are encoded in specific dimensions of vector embeddings from models like BERT, GPT-2, and MPNet. The authors introduce the Linguistically Distinct Sentence Pairs (LDSP-10) dataset, designed to isolate ten key linguistic features such as synonymy, negation, tense, and quantity. Using statistical methods, including the Wilcoxon signed-rank test, mutual information, and recursive feature elimination, the paper identifies key embedding dimensions responsible for each LP. It also proposes the Embedding Dimension Importance (EDI) score, a metric for quantifying the relevance of specific dimensions to LPs. The results show that some properties, like negation and polarity, are robustly encoded in distinct dimensions, whereas others, such as synonymy, exhibit more complex patterns. The study contributes to embedding interpretability, bias mitigation, and model optimization.\", \"### Strengths\", \"Introduces the EDI score to quantify the importance of embedding dimensions for linguistic properties.\", \"Uses BERT, GPT-2, and MPNet to validate findings, showing consistency across architectures.\", \"LDSP-10 Dataset \\u2013 A well-designed dataset that isolates linguistic features, enabling fine-grained analysis of embeddings.\", \"### Weaknesses\", \"The connection between the Wilcoxon signed-rank test, mutual information, and recursive feature elimination could be explained more clearly, especially how they complement each other in the analysis.\", \"The study does not include evaluations on Llama or other recent frontier models, so it is unclear how well the findings generalize to the latest architectures.\", \"The practical applications of the results could be further elaborated, providing more concrete insights into how this analysis can be used in real-world NLP tasks.\"], \"rating\": \"6\", \"confidence\": \"4\"}" ] }
ukb20VREPd
Fast Proxies for LLM Robustness Evaluation
[ "Tim Beyer", "Jan Schuchardt", "Leo Schwinn", "Stephan Günnemann" ]
Evaluating the robustness of LLMs to adversarial attacks is crucial for safe deployment, yet current red-teaming methods are often prohibitively expensive. We compare the ability of fast proxy metrics to predict the real-world robustness of an LLM against a simulated attacker ensemble. This allows us to estimate a model's robustness to computationally expensive attacks without requiring runs of the attacks themselves. Specifically, we consider gradient-descent-based embedding-space attacks, prefilling attacks, and direct attacks. Even though direct attacks in particular do not achieve high ASR, we find that they and embedding-space attacks can predict attack success rates well, achieving $r_p=0.86$ (linear) and $r_s=0.97$ (Spearman rank) correlations with the full attack ensemble while reducing computational cost by three orders of magnitude.
[ "LLM Robustness", "Red-Teaming" ]
Accept
https://openreview.net/pdf?id=ukb20VREPd
https://openreview.net/forum?id=ukb20VREPd
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "yTY7a4bK5o", "u4kOFJ53kq", "feImwKE4VQ", "YmFde6tQDQ" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740899527360, 1741056286741, 1740870188062, 1740871447119 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission83/Reviewer_HzLC" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission83/Reviewer_TM7b" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission83/Reviewer_kNqG" ] ], "structured_content_str": [ "{\"title\": \"Effective Proxy Metrics for Efficient LLM Robustness Evaluation\", \"review\": \"review\\n\\n\\nThe paper presents a systematic investigation into the efficacy of low-cost proxy methods\\u2014direct prompting, prefilling attacks, and embedding-space attacks\\u2014for predicting the robustness of large language models (LLMs) against computationally intensive adversarial red-teaming. By leveraging a synthetic red-teamer ensemble comprising six distinct attack algorithms and evaluating over 7 million jailbreak attempts across 33 open-source models, the authors demonstrate that proxy metrics exhibit strong correlations with full attack ensemble outcomes. \\n\\nQuality\\n\\nThe experimental framework is rigorous, encompassing diverse model families (Llama, Mistral, Phi, etc.) and evaluation scenarios (within-family, cross-family, and robustness fine-tuning). The use of standardized metrics (Pearson/Spearman correlations) and large-scale datasets (300 harmful prompts from AdvBench) ensures statistical validity. The inclusion of multiple checkpoints during adversarial training (e.g., Circuit Breaker and Continuous Adversarial Training) adds depth to the analysis. However, the study\\u2019s exclusive focus on sub-10B parameter models limits insights into the generalizability of proxy methods to larger architectures (e.g., Llama-3-70B).\\n\\nClarity\\n\\nThe manuscript is well-organized, with a logical progression from problem definition (high costs of red-teaming) to methodology and results. Figures such as Figure 2 (cross-family model correlations) and Figure 4 (scaling trends with prompt count) effectively visualize key findings. The appendix provides additional technical details, though some sections (e.g., attack algorithm descriptions) could benefit from conciseness.\\n\\nOriginality\\n\\nThis work addresses a critical gap in LLM safety evaluation by proposing cost-effective alternatives to resource-intensive red-teaming. Key innovations include the first systematic validation of proxy methods, revealing that direct prompting\\u2014despite its simplicity\\u2014serves as a strong baseline for ranking model robustness. Additionally, the study demonstrates the utility of proxy metrics for cross-family robustness prediction and dynamic monitoring of robustness improvements during adversarial fine-tuning.\\n\\nSignificance\\n\\nThe findings have substantial practical implications for LLM safety research. By reducing evaluation costs by ~1,000x, proxy methods democratize robustness assessments for resource-constrained teams. The correlation between proxy metrics and adversarial training trajectories (e.g., Circuit Breaker training) offers actionable insights for model alignment strategies.\\n\\nStrengths\\n\\nThe study\\u2019s strengths lie in its scale and methodological rigor, spanning 33 models, 6 attack algorithms, and 7 million jailbreak attempts. 
Direct prompting emerges as a particularly practical baseline due to its simplicity and accessibility. The multi-scenario analysis\\u2014encompassing within-family comparisons, cross-family generalization, and training-phase monitoring\\u2014underscores the versatility of proxy methods.\\n\\nWeaknesses\\n\\nLimitations include the focus on sub-10B parameter models, which raises questions about the generalizability of proxies to larger architectures. The synthetic attack ensemble, while comprehensive, lacks diversity in newer attack paradigms (e.g., multi-turn or multimodal jailbreaking). Direct prompting\\u2019s sensitivity to prompt count (>50 required for stable correlations) may hinder its applicability in low-resource settings. Furthermore, embedding-space attacks rely on white-box assumptions, diverging from real-world black-box threat models.\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"This paper explores fast proxy methods for LLM robustness but lacks broader validation across models and attack techniques\", \"review\": \"## Summary\\nThis paper explores computationally efficient proxy methods for assessing the robustness of large language models (LLMs) against adversarial attacks. The key claim is that these proxies\\u2014direct prompting, prefilling, and embedding-space attacks\\u2014can serve as reliable indicators of a model\\u2019s susceptibility to adversarial red-teaming without requiring costly attack methods. The authors construct a strong synthetic red-teamer by combining six adversarial attack algorithms and use this as a benchmark to evaluate the proxy methods across 33 open-source models.\\n\\nTheir findings suggest that while direct prompting does not achieve high absolute attack success rates (ASR), it correlates well with the more expensive attack ensemble when a sufficient number of prompts (>50) are used. Embedding-space attacks demonstrate a stronger correlation with ensemble methods in low-prompt settings but require white-box access, making them impractical for many real-world applications. 
Prefilling attacks, which prepend adversarial content to a model\\u2019s output, generally perform worse than the other two approaches.\\n\\n## Strengths\\n- Broad Model Selection \\u2013 The study covers a diverse set of models spanning multiple families (Llama, Mistral, Qwen, Phi, etc.), offering valuable insights into robustness trends both within and across model families.\\n- Computational Efficiency \\u2013 The proposed proxy methods reduce evaluation costs by several orders of magnitude compared to traditional red-teaming, making them attractive for large-scale safety assessments.\\n- Scalability Analysis \\u2013 The authors examine how the effectiveness of different proxy methods scales with the number of prompts, providing practical guidance for researchers balancing accuracy with efficiency.\\n- Trade-Offs Between Methods \\u2013 The comparative analysis highlights the strengths and limitations of each proxy approach, particularly in terms of feasibility in real-world attack scenarios.\\n\\n## Weaknesses\\n- Limited Exploration of Proxy Methods \\u2013 The study evaluates only three proxy techniques, while other techniques (e.g., adaptive adversarial prompting [1], reinforcement learning-based attackers [2]) could provide additional insights.\\n- Justification for Direct Prompting\\u2019s Suitability \\u2013 While the authors conclude that direct prompting is a strong baseline, their own data show that embedding-space attacks correlate better with the expensive red-teaming ensemble (particularly within model families). The preference for direct prompting is argued on the basis of real-world feasibility, but this could be better substantiated.\\n- Focus on Small-Scale Models \\u2013 The study is limited to models under 10B parameters, leaving open the question of whether these proxy methods would generalize to frontier-scale LLMs, where adversarial robustness is a more pressing issue.\\n- Terminology and Clarity \\u2013 The paper introduces several model categorization terms (e.g., \\\"instruct,\\\" \\\"safety-tuned,\\\" \\\"adversarially trained,\\\" \\\"circuit breaker,\\\" and \\\"capability-optimized\\\") without clearly defining or referencing them. Additionally, technical terms like PGD and ASR are used without definition, making it harder for readers unfamiliar with adversarial attack literature to follow the work. Also, while the correlation coefficients are well presented, some visualizations do not effectively communicate key insights, particularly regarding the relative strengths of different proxy methods.\\n\\n## Overall Evaluation\\nThe paper addresses an important problem in adversarial robustness evaluation by proposing faster, cheaper alternatives to traditional red-teaming. Its broad empirical validation is a strength, but the analysis could be more comprehensive, particularly in justifying the ranking of proxy methods and expanding to larger models. Future work should expand to larger models and include additional proxy techniques to provide a more comprehensive picture of lightweight adversarial robustness testing.\\n\\n## References\\n[1] Paulus, A., Zharmagambetov, A., Guo, C., Amos, B., & Tian, Y. (2024). Advprompter: Fast adaptive adversarial prompting for llms. arXiv preprint arXiv:2404.16873.\\n[2] Wang, X., Peng, J., Xu, K., Yao, H., & Chen, T. (2024, August). Reinforcement learning-driven llm agent for automated attacks on llms. In Proceedings of the Fifth Workshop on Privacy in Natural Language Processing (pp. 
170-177).\", \"rating\": \"5\", \"confidence\": \"2\"}", "{\"title\": \"Review: Fast Proxies for LLM Robustness Evaluation\", \"review\": \"The paper \\\"Fast Proxies for LLM Robustness Evaluation\\\" proposes a computationally efficient framework for evaluating the robustness of LLMs to adversarial attacks. The authors use relatively inexpensive proxies to estimate LLM robustness, which include direct prompting, embedding-space attacks, and pre-filling. The authors demonstrate that these proxy methods can predict ASRs with high accuracy, offering a promising approach for scaling robustness evaluations. The experiments cover a wide range of LLMs, providing a comprehensive set of results. Slightly concerning is the lack of analysis of false negatives/positives, which could skew results. While the correlation results between proxies and full attack success rates are compelling, the explanation of this phenomenon is not well covered. More insight into the underlying mechanisms of the proxies would help in understanding adversarial robustness in LLMs. The paper is clearly written, and the work is of significant practical importance - the study's findings could significantly reduce the cost of adversarial testing in LLMs, making it more feasible to conduct safety evaluations.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
u61yT9ZkEZ
On-Premises LLM Deployment Demands a Middle Path: Preserving Privacy Without Sacrificing Model Confidentiality
[ "Hanbo Huang", "Yihan Li", "Bowen Jiang", "Lin Liu", "Bo Jiang", "Ruoyu Sun", "Zhuotao Liu", "Shiyu Liang" ]
Current LLM customization typically relies on two deployment strategies: closed-source APIs, which require users to upload private data to external servers, and open-weight models, which allow local fine-tuning but pose misuse risks. In this paper, we argue that (1) deploying closed-source LLMs within user-controlled infrastructure (*on-premises deployment*) enhances data privacy and mitigates misuse risks, and (2) a well-designed on-premises deployment must ensure model confidentiality---by preventing model theft---and offer privacy-preserving customization. Prior research on small models has explored securing only the output layer within hardware-secured devices to balance confidentiality and customization efficiency. However, we show that this approach is insufficient for defending large-scale LLMs against distillation attacks. We therefore introduce a {semi-open deployment framework} that secures only a few, carefully chosen layers, achieving distillation resistance comparable to fully secured models while preserving fine-tuning flexibility. Through extensive experiments, we show that securing bottom layers significantly reduces functional extraction risks. Our findings demonstrate that privacy and confidentiality can coexist, paving the way for secure on-premises AI deployment that balances usability and protection.
[ "LLM on-premises deployment", "deployment security" ]
Accept
https://openreview.net/pdf?id=u61yT9ZkEZ
https://openreview.net/forum?id=u61yT9ZkEZ
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "gdKpuRjuCM", "P4xdiQpHRF", "8ecpz8KgGB", "5YHgveAWQL" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740751293951, 1741109361508, 1740821511034, 1740898235409 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission31/Reviewer_iPZD" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission31/Reviewer_CAPj" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission31/Reviewer_1trD" ] ], "structured_content_str": [ "{\"title\": \"A Semi-Open Framework for Secure Model Customization\", \"review\": \"# Review\\nThis paper authors propose a\\u00a0semi-open deployment framework\\u00a0that protects certain key layers (typically the bottom layers) of the model to prevent\\u00a0distillation attacks\\u00a0while allowing fine-tuning of the unprotected parts. The framework introduces a novel metric called\\u00a0Distillation Difficulty (DD)\\u00a0to evaluate the security of the model under different protection strategies. \\n\\n## Strengths: \\n1. New Framework:\\u00a0The paper proposes a semi-open deployment framework that offers a middle ground between fully closed-source APIs and open-weight models, addressing both privacy and misuse risks.\\n2. Transition layer and DD: The paper proposeds transition layer , which is used to determine whether specific layers should be protected.And proposes\\u00a0Distillation Difficulty (DD)\\u00a0metric to evaluate the security of the model under specific protection strategies.\\n3. Experiments: Experiments were conducted on five models using three distillation attack strategies. The performance of the models was evaluated across six downstream tasks, and the\\u00a0ADR\\u00a0was used to measure the security of the models.\\n\\n## Weakness\\nWhile the validity has been verified on large language models, there are currently other types of models, such as diffusion models, T5/GLM architectures, etc. Whether this framework is effective for different architectures still needs to be validated.\", \"rating\": \"8\", \"confidence\": \"4\"}", "{\"decision\": \"Accept\", \"comment\": \"The paper is highly relevant to the workshop, as it directly addresses the challenge of trustworthy AI deployment by ensuring privacy-preserving customization while preventing model theft. The reviewers commend the paper\\u2019s rigorous theoretical foundation, extensive empirical validation across multiple models and attack strategies, and well-structured presentation.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review to On-Premises LLM Deployment Demands a Middle Path\", \"review\": \"This paper presents a principled approach to deciding which layers of black-box on-premises deployed LLMs to secure in order to balance its customizability with its protection against model distillation. The authors prove that a minimal layer exists up to which all layers should be secured to ensure distillation robustness and propose a principled, efficient method to discover it. They confirm their experimental results with extensive experimental evaluation.\\n\\nI think this paper presents a very principled approach to the proposed topic and confirms their approach with strong and convincing results. 
While the targeted topic is very specific and potentially not too relevant in practice yet, I think this work is a valuable contribution to making on-premises deployment a feasible option in the future.\\n\\nI did not check in detail the proof of Theorem 1, which proves that the proposed method is guaranteed to be applicable to any LLM.\", \"i_only_have_a_few_minor_nitpicks\": [\"While very legible on colored screens, Figures 3-6 are uninterpretable on black-and-white screens/printouts.\", \"The \\\"Secured Ratio\\\" in Table 1 is never introduced; I assume from context it refers to the \\\"number of secured layers\\\".\"], \"rating\": \"8\", \"confidence\": \"3\"}", "{\"title\": \"Review of SOLID\", \"review\": \"## Summary\\nThis paper addresses the challenge of securely deploying large language models (LLMs) in privacy-sensitive environments by proposing SOLID, a semi-open framework that selectively secures a minimal set of layers to balance model confidentiality (preventing distillation attacks) and customization flexibility. The authors demonstrate that securing bottom layers of transformers provides robust security against distillation attacks while preserving fine-tuning capabilities, offering a practical solution for on-premises AI deployment in regulated industries.\\n\\n## Strengths\\n1. Rigorous Methodology:\\nThe paper employs a robust combination of theoretical analysis and extensive empirical validation, testing SOLID across multiple models (e.g., Llama2-70B, Mistral-7B) and distillation strategies. This thorough approach strengthens the credibility of the findings.\\n\\n2. Well-Organized and Clear Explanations:\\nThe paper is logically structured, with clear sections that guide the reader through the problem formulation, methodology, and experiments. Key concepts, such as the transition layer and distillation difficulty, are explained in an accessible manner.\\n\\n3. Novel Framework and Theoretical Contribution:\\nSOLID is not the first to propose a partially open approach, but it advances prior work by introducing a theoretically grounded method for selecting which layers to secure and demonstrating that securing bottom layers is more effective against distillation attacks.\\n\\n## Weaknesses\\n1. While SOLID reduces the number of secured layers compared to fully secured models, the paper does not thoroughly discuss the computational cost of implementing hardware-secured environments (e.g., TEEs) for these layers.\", \"rating\": \"8\", \"confidence\": \"3\"}" ] }
tpiLrjgcnF
Token-Level Adversarial Prompt Detection Based on Perplexity Measures and Contextual Information
[ "Zhengmian Hu", "Gang Wu", "Saayan Mitra", "Ruiyi Zhang", "Tong Sun", "Heng Huang", "Viswanathan Swaminathan" ]
In recent years, Large Language Models (LLMs) have emerged as pivotal tools in various applications. However, these models are susceptible to adversarial prompt attacks, where attackers can carefully curate input strings that mislead LLMs into generating incorrect or undesired outputs. Previous work has revealed that with relatively simple yet effective attacks based on discrete optimization, it is possible to generate adversarial prompts that bypass moderation and alignment of the models. This vulnerability to adversarial prompts underscores a significant concern regarding the robustness and reliability of LLMs. Our work aims to address this concern by introducing a novel approach to detecting adversarial prompts at a token level, leveraging the LLM's capability to predict the next token's probability. We measure the degree of the model's perplexity, where tokens predicted with high probability are considered normal, and those exhibiting high perplexity are flagged as adversarial. Additionaly, our method also integrates context understanding by incorporating neighboring token information to encourage the detection of contiguous adversarial prompt sequences. To this end, we design two algorithms for adversarial prompt detection: one based on optimization techniques and another on Probabilistic Graphical Models (PGM). Both methods are equipped with efficient solving methods, ensuring efficient adversarial prompt detection. Our token-level detection result can be visualized as heatmap overlays on the text sequence, allowing for a clearer and more intuitive representation of which part of the text may contain adversarial prompts.
[ "adversarial prompt detection", "llm security" ]
Accept
https://openreview.net/pdf?id=tpiLrjgcnF
https://openreview.net/forum?id=tpiLrjgcnF
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "xs83TFpIxe", "n8Dp13W7H4", "Tae5upEQOC", "LsUFhbBo7g" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740222154492, 1740398334829, 1741080261646, 1740905210643 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission111/Reviewer_VbD3" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission111/Reviewer_gg8Z" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission111/Reviewer_nMvx" ] ], "structured_content_str": [ "{\"title\": \"PPL-based detection method for adversarial prompt attacks\", \"review\": \"This paper proposes two methods for detecting adversarial prompt attacks in LLMs, one for binary classification (attacked/not attacked) and one more fine-grained. Both rely on the perplexity/probability of tokens to identify an attacked prompt.\", \"strengths\": \"The paper is clearly written overall.\", \"weaknesses\": \"The authors state that existing methods focus on detecting prompt attacks at the sequence level only, while their work is the first to focus on token-level detection. On the other hand, the perplexity of individual tokens is considered together with the perplexity of their surrounding tokens, since -- as the authors themselves state -- only looking at individual token perplexity might inaccurately flag rare tokens as attacks. It is unclear to this reviewer what the strategies proposed here achieve that is not also achieved by established methods, such as the perplexity sliding window proposed in [1]. Sliding windows of different token lengths (up to a sliding window of length=1) can capture different granularities, and this method measures both the severity of the PPL increase and also lends itself to binary classification (using a threshold).\\n\\n[1] (Jain et al., 2023) Baseline defenses for adversarial attacks against aligned language models\", \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"New perplexity-based design for detecting adversarial prompts, but narrowly focuses on GCG...\", \"review\": \"The paper proposes a token-level perplexity-based method to detect adversarial prompts. In particular, it focuses on the class of GCG-produced adversarial prompts (Zou et al., 2023) and integrates the contextual information in the design to avoid high false positive rates. The interesting part of this paper is the new probabilistic model for modeling the perplexity of each token, which formulates a new optimization problem with regularization terms of contextual coherence among adjacent tokens (Equation 1 and Equation 2).\\n\\nThe experimental results seem promising. However, given the existing broad literature on jailbreak attacks on LLMs, the paper narrowly focuses on GCG, a very early approach known to generate adversarial suffixes with high perplexity. I view this as a major positioning issue of this paper. The question is whether the proposed detection method is generalizable to achieve good performance for other types of jailbreak attacks. It is worth noting that there are automatic jailbreak attacks, like AutoDAN [1], PAIR [2], and many other manual methods that handcraft \\\"natural\\\" adversarial prompts, achieving much lower perplexity compared with GCG. 
The authors should apply their detection method to these stealthier jailbreak attacks for more comprehensive evaluations.\\n\\n[1] Zhu et al., AutoDAN: Interpretable Gradient-Based Adversarial Attacks on Large Language Models, https://arxiv.org/pdf/2310.15140\\n\\n[2] Dobriban et al., Jailbreaking Black Box Large Language Models in Twenty Queries, https://arxiv.org/pdf/2310.08419v3\", \"rating\": \"5\", \"confidence\": \"4\"}", "{\"decision\": \"Accept\", \"comment\": \"The paper presents a novel token-level adversarial prompt detection approach but lacks sufficient validation and comparison with existing methods. Its primary limitation is the narrow focus on GCG-based adversarial prompts without evaluating stealthier jailbreak techniques like AutoDAN or PAIR, which could challenge its generalizability.\\nAdditionally, while the authors emphasize token-level detection, they do not convincingly demonstrate its advantages over sequence-level methods or established sliding window techniques. The absence of a validation set and limited discussion on hyperparameter transferability further weaken the study\\u2019s real-world applicability. Addressing these issues would significantly strengthen the paper's contributions.\", \"title\": \"Paper Decision\"}", "{\"title\": \"A good paper with a few areas for improvement\", \"review\": \"# Summary\\nThe paper introduces a novel approach for detecting adversarial prompts in Large Language Models (LLMs) at the token level. The authors propose methods that leverage both token perplexity and contextual information to identify specific tokens that are likely part of adversarial inputs.\", \"their_technical_contribution_consists_of_two_algorithms\": [\"1. A discrete optimization approach\", \"2. A probabilistic graphical model\", \"# Strengths\", \"**Novel contribution:** The authors present a new and non-trivial adversarial prompt detection method, which they test on a range of models and performance metrics related to both sequence- and token-level adversarial prompt detection.\", \"**Clarity of writing and presentation:** The paper is well-structured and clearly written. The methodology is explained in a logical progression, and the mathematical formulations are presented with sufficient detail.\", \"**Practicality:** The authors highlight that their methods can be implemented with relatively limited computational demands and hardware requirements. They demonstrate that their algorithms can be implemented in linear time and note that even GPT-2-small, which can be run on a CPU, achieved perfect results at the sentence-level detection task.\", \"# Weaknesses\", \"**Lack of validation set:** While the authors performed a grid search to determine optimal hyperparameter values for different models, they did not evaluate the performance of these hyperparameters on a separate validation set or investigate how well they transfer to significantly different datasets. This is an important consideration in assessing this method's usefulness for real-world applications.\", \"**Limited comparison to previous work:** The paper presents impressive results with their methods achieving perfect classification performance at the sequence level. However, it would be informative if they compared their results to existing sequence-level adversarial detection methods. 
Does their approach represent a significant improvement?\", \"**Insufficient motivation for token-level detection:** The authors acknowledge that perplexity-based methods for sequence-level detection of adversarial prompts are well-established. However, they provide limited motivation for why token-level detection is uniquely useful or important. The paper would be strengthened by including concrete use cases where identifying specific adversarial tokens provides practical advantages over sequence-level detection.\", \"**Typos:** (line 024) Additionaly -> Additionally, (line 142) produce -> produces, (Table 3) Performance Performance -> Performance.\", \"# Overall Assessment\", \"This paper makes a valuable contribution to the field of LLM security by introducing token-level adversarial prompt detection methods. While the technical approach is sound and the results are promising, addressing the noted weaknesses would strengthen the work considerably. In particular, more rigorous validation and better motivation for token-level over sequence-level detection would help establish the usefulness of this approach.\"], \"rating\": \"7\", \"confidence\": \"3\"}" ] }
tURn0wNI3s
Benchmarking Intent Awareness in Prompt Injection Guardrail Models
[]
Prompt injection remains a major security risk for large language models, enabling adversarial manipulation via crafted inputs. Various prompt guardrail models have been developed to mitigate this threat, yet their efficacy in intent-aware adversarial settings remains underexplored. Existing defenses lack robustness in intent-aware adversarial settings, often relying on static attack benchmarks. We introduce a novel intent-aware benchmarking framework that despite taking very few contextual examples as input, diversifies adversarial inputs and assesses over-defense tendencies. Our experiments reveal that current prompt injection guardrail models suffer from high false negatives in adversarial cases and excessive false positives in benign scenarios, highlighting critical limitations.
[ "guardrail", "prompt injection", "security", "llm application" ]
Reject
https://openreview.net/pdf?id=tURn0wNI3s
https://openreview.net/forum?id=tURn0wNI3s
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "wrCsIHclOi", "awPD4CFBaW", "SB7T9mlGb8" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740496768422, 1740896640116, 1741057431143 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission136/Reviewer_gtpA" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission136/Reviewer_KjeJ" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Novel Pipeline and Datasets to Address Benign Prompt Injection Identifications; Lacking in Clarity, Organization, and Clear & Wide Usability/Significance\", \"review\": \"This workshop paper highlights a key issue in prompt injection LLM defense: benign examples can lead to high FPR, and on intent-aware datasets, can exhibit both high FPR and FNR. Perhaps, in non-intent-aware settings, this issue is more or less alleviated. The authors propose a new datasets that can help correct and analyze FP/FN behaviors in LLMs, introduce a new framework for generating challenging tough prompts, and introduce a prompt injection defense LLM with original and introduced datasets that performs better on benchmarks. Overall, I find this paper to be very hard to parse due to the formatting of sentences and paragraphs (ie using an acronym before it is introduced), walls of text that introduce a bunch of variables/notation, and lack of clarity in description. It appears this paper highlights an issue and addresses it with the datasets and model, but it is not clear how much improvement this model brings as it only has results for one dataset (PPC). Intent-Inject IRS Analysis is in a paragraph rather than incorporated into the table, which is also hard to read (as described about clarity earlier). In summary:\", \"pros\": [\"Table 1 gives good intuition on the difference between I-I and N-I-I\", \"Novel datasets / pipeline can help bridge gap between extreme FPR/FNRs with benign examples, which can potentially happen frequently in real life\", \"Proposed model performs better across one dataset\"], \"cons\": [\"Pretty hard to parse paper (notes above) with notations, setup, results, etc.\", \"It would be nice to have more setup on why this is important IRL - for example, these benign examples can happen very frequently in LLM use cases or education, cannot be so extreme to label them as adversarial injections\", \"Only show improvement across only 1 dataset\"], \"rating\": \"5\", \"confidence\": \"3\"}", "{\"title\": \"The paper presents a framework for evaluating intent-aware prompt injection attacks in Python chatbots and airline booking assistants, showing improved performance with their model, but lacks comprehensive comparisons and broader domain evaluations.\", \"review\": \"The paper provides a systematic framework for evaluating intent-aware prompt injection attacks and conducts experiments with existing guardrails across two domains: Python programming chatbots and airline booking assistants. The authors propose and evaluate their own dataset and model, showing improved performance.\\n\\nHowever, the paper could be strengthened by including more examples and illustrations comparing the author\\u2019s trained model with other models.\\n\\nA limitation of the paper is its focus on only two specific application domains (PPC and FBA). 
It would have been beneficial to use additional domains and safety benchmarks for more robust evaluations.\\n\\nAdditionally, for the comparison in Table 2, significant models such as LlamaGuard, WildGuard, and other existing LLMs are missing. For example, how would the existing small Llama-3.1/2-1/3/8B-Instruct model perform using prompt engineering for the given task compared to the author's trained model?\\n\\nI believe the paper needs to be strengthened more at this point.\", \"rating\": \"4\", \"confidence\": \"3\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}" ] }
swRxS7s4rB
Interpretable Steering of Large Language Models with Feature Guided Activation Additions
[ "Samuel Soo", "Wesley Teng", "Chandrasekaran Balaganesh", "Tan Guoxian", "Ming YAN" ]
Effective and reliable control over Large Language Model behavior is a significant challenge. While activation steering methods, which add steering vectors to a model's hidden states, are a promising approach, existing techniques often lack precision and interpretability in how they influence model outputs. We introduce Feature Guided Activation Additions (FGAA), a novel activation steering method that leverages insights from Contrastive Activation Addition (CAA) and Sparse Autoencoder-Targeted Steering (SAE-TS). By operating in the latent space of a Sparse Autoencoder (SAE) and employing optimization techniques to select desired SAE features, FGAA constructs precise, human-interpretable steering vectors that provide better steering effects while maintaining coherence of steered model outputs. In this regard, evaluations on Gemma-2-2B and Gemma-2-9B models across various steering tasks demonstrate that FGAA outperforms existing steering methods of CAA, SAE decoder steering, and SAE-TS. Our results also highlight important trade-offs between steering scale and general model capabilities that are consistent across all tested steering methods.
[ "interpretability and explainable AI", "activation steering", "sparse autoencoders", "mechanistic interpretability applications" ]
Accept
https://openreview.net/pdf?id=swRxS7s4rB
https://openreview.net/forum?id=swRxS7s4rB
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "ouWimUqsp7", "YyezLmmk8D", "NTT8wzOjyc", "1TdBFWesyi" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740839395849, 1740821978860, 1741103305860, 1740656409398 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission11/Reviewer_x7Ma" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission11/Reviewer_rayq" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission11/Reviewer_2gQ6" ] ], "structured_content_str": [ "{\"title\": \"Review of \\\"Interpretable Steering of Large Language Models with Feature Guided Activation Additions\\\"\", \"review\": [\"**Summary:**\", \"The paper introduces Feature Guided Activation Additions (FGAA), a proposed method for steering large language models (LLMs) by modifying their internal activations. FGAA aims to combine and improve upon two existing approaches: Contrastive Activation Addition (CAA) (Panickssery et al., 2023) and SAE-Targeted Steering (SAE-TS) (Chalnev et al., 2024). CAA calculates steering vectors by contrasting activations from prompts exhibiting a desired behavior with those exhibiting an undesired (or opposite) behavior. SAE-TS, on the other hand, learns a linear mapping between steering vectors and their effects on the model's output, measured as changes in SAE feature activations. SAE-TS then use (inverse of) the mapping to get the steering vector to achieve the desired effect. FGAA attempts to integrate these two: it starts with the CAA method to initially form a effect vector (termed steering vector in the original of CAA framework), then get the steering vector using the SAE-TS approach (by applying the inverse of the mapping). The authors also propose feature filtering steps (density filtering and BOS feature removal) and a hyperparameter search ($n_1$, $n_2$) to further refine the steering vector.\", \"**Strengths:**\", \"**Algorithmic Contributions:** The specific implementation of the steering vector construction, including the filtering steps and the use of the effect approximators represent a novel contribution.\", \"**Empirical Evaluation:** The paper conducts evaluations using the same framework as Chalnev et al. (2024), comparing FGAA to CAA, SAE feature steering, and SAE-TS on several steering tasks. The results suggest that FGAA can outperform these baselines in some cases.\", \"**Investigates General Capabilities:** The authors analyze the impact of steering on model perplexity and performance on MMLU and MMLU-Pro benchmarks, which is important for understanding the trade-offs between steering and general language model capabilities.\", \"**Ablation and Hyperparameter Sweep:** The paper explores the influence of the $n_1$ and $n_2$ hyperparameters and provides other ablation. These discussions are diligent and insightful.\", \"**Weaknesses:**\", \"**Ethical Concerns: Textual Similarity:** The paper exhibits substantial textual similarity to Chalnev et al. (2024), particularly in the introduction. While Chalnev et al. is cited, **the flow is almost identical** and it seems like text recycling, raising concerns about proper attribution and potential plagiarism. Several examples demonstrate this problem:\", \"1. **Opening Sentences / Motivation:**\", \"**Chalnev et al.:** \\\"There are widespread calls for better control of the behaviour of Large Language Models (LLMs; e.g. The White House (2023)). 
Current methods such as prompting (Wallace et al., 2024) and finetuning (Ouyang et al., 2022, Chung et al., 2022) offer some degree of control, but have clear limitations.\\\"\", \"**This paper:** \\\"Concerns are growing about effective, and reliable control of the behaviour of Large Language Models (LLMs), and they are being increasingly recognized. Conventional methods such as prompting (Wallace et al., 2024) and fine-tuning (Ouyang et al., 2022) provide a bit of control, yet they unfortunately have many important limitations that users must consider.\\\"\", \"2. **Limitations of Prompting/Fine-tuning:**\", \"**Chalnev et al.:** \\\"For example, prompting can be fragile and is often susceptible to methods that can subvert these instructions (Wei et al., 2023). Finetuning a model can be more robust but requires a curated dataset for training which can be both expensive and time-consuming to produce (e.g. Dubey et al. (2024)).\\\"\", \"**This paper:** \\\"Prompting is often weak and open to manipulation, but fine-tuning needs a lot of computing power and well-organized data.\\\"\", \"3. **Introducing Steering Vectors:**\", \"**Chalnev et al.:** \\\"*Steering vectors* (Turner et al., 2024) have the potential to be more robust than prompting, and both cheaper and easier to implement than finetuning. Steering vectors work by adding activations to the hidden state of a model, part way through the forward pass (Section 3).\\\"\", \"**This paper:** \\\"A promising alternative is offered by activation steering, providing a stronger and more efficient approach than prompting and fine-tuning. This method adds steering vectors to the model\\u2019s hidden states and this influences its behavior during the forward pass.\\\"\", \"4. **Problem of Unpredictability:**\", \"**Chalnev et al.:** \\\"However, a problem with current steering methods is their unpredictability \\u2013 it\\u2019s often unclear exactly how a steering vector will affect model behavior. Steering vectors may not produce the intended changes in the model\\u2019s output or may cause unforeseen behaviors, as we discuss in Section 3.\\\"\", \"**This paper:** \\\"...existing activation steering methods often miss one of precision, reliability and interpretability, leading to unintended model changes and poor output quality.\\\"\", \"**Misleading Branding as a Contrastive Method:** The paper frames FGAA as building on *contrastive* activation addition (CAA). However, the hyperparameter sweep reveals that the authors consistently set `n_2 = 0`. This means they *completely discard* the \\\"negative\\\" or contrastive component of the steering vector, effectively eliminating the contrastive aspect. The method, as implemented and evaluated, is *not* contrastive. This is a misrepresentation of the method.\", \"**Potential Lack of Novelty and Generalizability:** Because the method is not truly contrastive, its primary distinction from SAE-TS lies in averaging the SAE features of positive examples, rather than directly using the SAE feature's decoder vector (a single SAE feature). One may hypothesize that this is beneficial due to \\\"feature splitting\\\" (Chanin\\u00a0*et al.*, 2024), where a single concept might be represented by multiple SAE features. However, this raises a question:\", \"**SAE-Specificity:** Is this benefit unique to SAEs exhibiting significant feature splitting? If a SAE with less splitting were used, would FGAA still offer an advantage over SAE-TS? 
The paper does not address this, limiting the generalizability of the findings.\", \"**Task-Specific Prompt Engineering and Hyperparameter Tuning:** The evaluation relies on manually crafted prompts and task-specific hyperparameter tuning (finding optimal `n_1` values). This significantly limits the practicality and generalizability of the method. It's unclear how FGAA would perform on new tasks without extensive manual effort. The paper lacks a general, task-agnostic approach.\", \"**Additional Note**\", \"I would consider raising the rating if the ethical concern is deemed unfounded.\", \"**References**\", \"Chanin, D., Wilken-Smith, J., Dulka, T., Bhatnagar, H., & Bloom, J. (2024). *A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders*. arXiv preprint arXiv:2409.14507.\", \"Chalnev, S., Siu, M., & Conmy, A. (2024). *Improving Steering Vectors by Targeting Sparse Autoencoder Features*. arXiv preprint arXiv:2411.02193.\", \"Panickssery, N., Gabrieli, N., Schulz, J., Tong, M., Hubinger, E., & Turner, A. M. (2023). *Steering Llama 2 via Contrastive Activation Addition*. arXiv preprint arXiv:2312.06681.\"], \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"Nice new steering method!\", \"review\": \"Proposes a novel activation steering method that is a Pareto improvement on existing methods by incorporating a lot of additional mechanisms that bridge gaps in past steering techniques.\\n\\n**Quality**\", \"pros\": [\"A step forward in controllable text generation for LLMs; maybe when we learn to extract better features, this can be applied more effectively in real-life scenarios\", \"Programmatic feature selection is a major contribution to steering.\"], \"cons\": [\"Very recent (<1 week) work suggests some degree of entanglement between broadly related features (i.e., 'good' things get grouped together, and the same goes for the 'bad' things); this is the emergent misalignment paper. If stronger models tend to have such structures, the promise of steering is put in question: \\\"why do we need fine-grained control over individual features if we can just tell the model to be good?\\\". Not inherently a flaw of this work, but I wanted to point it out.\"], \"rating\": \"8\", \"confidence\": \"4\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"FGAA is a well motivated and promising improvement over SAE-TS and CAA\", \"review\": \"Summary: The paper introduces Feature Guided Activation Additions (FGAA), a new method for steering LLMs that builds upon CAA and SAE-TS.\", \"strengths\": \"1. FGAA is a novel and well motivated method. The three steps of feature filtering (density filtering, BOS feature removal, top-k selection) are explained and motivated. \\n2. FGAA achieves strong results on both steering performance and output quality. The method narrowly beats SAE-TS, but clearly outperforms CAA and SAE.\\n3. FGAA is compared against alternative activation steering methods (CAA, SAE, SAE-TS). While other steering methods have been published recently, the comparison to the chosen steering methods is natural. The number of tasks (9) and models (2) is sufficient for showing the potential of FGAA over its alternatives.\\n4. Uses different evaluation metrics for measuring steering method performance: BCS with gpt-4o-mini to assess both behavioral alignment and coherence, and also evaluates effects of steering on general model capabilities by measuring perplexity on OpenWebText and performance on MMLU and MMLU-Pro.\", \"weaknesses\": \"1. 
For the paper to be considered stronger, more thorough comparisons of FGAA to CAA, SAE, SAE-TS and other alternatives need to be made. More tasks, models could be evaluated. Also tests for statistical significance of FGAA's superior performance over SAE-TS. The current comparison is sufficient for a workshop-level paper, but not for a full conference submission.\\n2. Visual presentation. The paper contains a lot of white space. Figure 1 should have larger fonts to be more readable. Table 1 goes over the linewidth. Figure 4 has a simple message but takes up a lot of space. Figure 5 should have a larger font size for legend and axis labels. \\n3. FGAA builds upon the main ideas from CAA and SAE-TS and is more of an incremental improvement by combining existing ideas. The results of FGAA are only marginally better for many tasks (Anger, London, Wedding, ...)\", \"recommendation\": \"Good paper, accept. Overall, the paper introduces a novel steering method that is potentially an improvement over CAA and SAE-TS. More Analysis would be needed to clearly show its advantage over CAA and SAE-TS to make the paper more significant.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
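The record above closes a set of reviews about steering vectors built from contrastive activations and sparse-autoencoder features. As an illustrative aside only (none of this code comes from the reviewed paper), the sketch below shows the basic activation-addition idea those reviews take for granted: build a steering vector as the mean difference between activations collected on two sets of prompts, then add a scaled copy to a hidden state. All arrays here are random stand-ins for real model activations.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hidden states collected at one layer of a model; in practice
# these would come from forward hooks over "positive" and "negative" prompts.
d_model = 16
pos_acts = rng.normal(loc=1.0, size=(32, d_model))
neg_acts = rng.normal(loc=0.0, size=(32, d_model))

# Contrastive steering vector: mean difference of the two activation sets,
# unit-normalized so the injection scale is easy to reason about.
steering_vec = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
steering_vec /= np.linalg.norm(steering_vec)

def apply_steering(hidden, vec, scale):
    """Add a scaled steering vector to every token position's hidden state."""
    return hidden + scale * vec

hidden = rng.normal(size=(8, d_model))          # (seq_len, d_model) hidden states
steered = apply_steering(hidden, steering_vec, scale=4.0)
print("mean shift along steering direction:",
      float(((steered - hidden) @ steering_vec).mean()))
```

In CAA-style methods the vector comes from contrastive prompt pairs, while the SAE-based variants discussed in the reviews assemble it from sparse-autoencoder feature directions; the injection step itself is the same in both cases.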
sjwX4Vif03
HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild
[ "Zhiying Zhu", "Yiming Yang", "Zhiqing Sun" ]
Hallucinations pose a significant challenge to the reliability of large language models (LLMs) in critical domains. Recent benchmarks designed to assess LLM hallucinations within conventional NLP tasks, such as knowledge-intensive question answering (QA) and summarization, are insufficient for capturing the complexities of user-LLM interactions in dynamic, real-world settings. To address this gap, we introduce HaluEval-Wild, the first benchmark specifically designed to evaluate LLM hallucinations in the wild. We meticulously collect challenging (adversarially filtered by Alpaca) user queries from ShareGPT, an existing real-world user-LLM interaction dataset, to evaluate the hallucination rates of various LLMs. Upon analyzing the collected queries, we categorize them into five distinct types, which enables a fine-grained analysis of the types of hallucinations LLMs exhibit, and synthesize the reference answers with the powerful GPT-4 model and retrieval-augmented generation (RAG). Our benchmark offers a novel approach towards enhancing our comprehension of and improving LLM reliability in scenarios reflective of real-world interactions.
[ "Large Language Models", "Hallucination", "Factuality", "Trustworthy", "Evaluation" ]
Accept
https://openreview.net/pdf?id=sjwX4Vif03
https://openreview.net/forum?id=sjwX4Vif03
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "zvl5pg4u3f", "tIG3UeLQlB", "ju48DHzGlo", "3mwcNjGuM3" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1740856431280, 1740916954230, 1740810410629, 1740401098055 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission92/Reviewer_ypAP" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission92/Reviewer_WrTC" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission92/Reviewer_CRWM" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Good explanation of the benchmark generation steps, but a more in-depth analysis of the results would be valuable\", \"review\": [\"### Summary\", \"The paper proposes a benchmark for testing LLM hallucinations on \\\"real-world\\\" questions, mostly sourced from ShareGPT. This adds value since standard hallucination benchmarks typically focus on standardized tasks (e.g., summarization, machine translation), which do not fully represent real-world interactions with LLMs. The benchmark defines a pipeline for selecting hallucination-prone questions, categorizing them into several groups, and evaluating model hallucinations using an LLM judge. The \\\"ground truth\\\" answers are generated using a RAG approach, retrieving passages from a search engine.\", \"### Strengths\", \"While there may be room for improvement, the authors focus on thoroughly developing each step of the benchmark pipeline.\", \"The authors conduct experiments to validate the effectiveness of the RAG approach.\", \"The benchmark\\u2019s limitations are clearly stated.\", \"### Weaknesses\", \"It is unclear to me whether the filtering approach effectively selects the most hallucination-prone questions. Some counterfactuals or examples would be valuable.\", \"The result analysis could be expanded. It is unclear how hallucination scores compare to non-\\\"in the wild\\\" questions, given different scoring scales and significant hallucination rates in both cases.\", \"The paper format is in two-column layout and has not been updated for ICLR.\"], \"rating\": \"6\", \"confidence\": \"2\"}", "{\"title\": \"The paper presents HaluEval-Wild, a benchmark for evaluating hallucinations in LLMs using real-world queries, highlighting its structured categorization and mitigation strategies while noting limitations in model choices, dataset size, and reference answer bias.\", \"review\": \"Summary [This paper introduces HaluEval-Wild, a benchmark designed to evaluate hallucinations in large language models (LLMs) in real-world settings. The authors curate 500 challenging user queries from ShareGPT, filtering them using Alpaca and categorizing them into five types. They generate reference answers using GPT-4 with retrieval-augmented generation (RAG) and evaluate various LLMs on their hallucination rates. 
The study also explores self-reflection techniques for mitigating hallucinations.]\\n\\nStrengths\\n[Real-World Focus: Unlike traditional hallucination benchmarks, HaluEval-Wild captures user interactions in the wild, making it more relevant for practical applications.\", \"structured_categorization\": \"The classification of hallucinations into five distinct types allows for a more fine-grained analysis of model failures.\", \"evaluation_across_multiple_models\": \"The paper provides comparative hallucination rates for various open-source and closed-source LLMs.\", \"mitigation_strategies\": \"The study tests self-reflection techniques, providing insights into how LLMs can be improved to reduce hallucinations.\", \"comprehensive_dataset_collection_pipeline\": \"The methodology, including adversarial filtering with Alpaca and manual verification, ensures that the dataset contains genuinely challenging queries.]\\n\\nWeaknesses\\n[Justification for Model Choices:\\nThe paper uses Llama 2-7B to classify queries but Alpaca for adversarial filtering.\\nIt is unclear why two different models were used instead of a single model for both steps.\\nAdditional clarification on why Alpaca was chosen over other baseline models for filtering is needed.\", \"dataset_size_limitation\": \"The benchmark contains only 500 queries, which may be insufficient for evaluating generalization across different models and settings.\\nA larger dataset would improve statistical robustness and generalizability.\", \"lack_of_explicit_answer_type_differentiation\": \"The paper states that hallucinations are judged based on correct responses and cases where a model admits it doesn\\u2019t know.\\nHowever, there is no explicit dataset categorization distinguishing these response types.\\nA separate label for \\\"acknowledging lack of knowledge\\\" could be useful.\", \"small_scale_human_evaluation_in_section_f\": \"The GPT-4 vs. human evaluation comparison in Appendix F only uses 20 samples, which is too small to draw strong conclusions.\\nA larger-scale human evaluation would improve confidence in the findings.\", \"potential_bias_from_gpt_4_reference_answers\": \"Since GPT-4 is used to generate reference answers, its hallucination tendencies may introduce bias into the evaluation.\\nThe paper could benefit from human-curated reference answers or a comparison between GPT-4-generated and human-generated references.]\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Review\", \"review\": \"### Summary\\nThis paper introduces HaluEval-Wild, a benchmark for evaluating hallucinations in LLMs using real-world user queries from ShareGPT. The dataset consists of 500 adversarially filtered queries, categorized into five types (Out-of-Scope, Complex Reasoning, Inappropriate Content, Beyond-Modality, and Confused Queries).\\nThe study compares hallucination rates across multiple LLMs (GPT-4, GPT-3.5, Mixtral, Mistral, Llama-2, Vicuna, and Alpaca) and finds that knowledge-distilled models tend to hallucinate more. 
It also evaluates Self-Reflection (SR) and Retrieval-Augmented Generation (RAG), demonstrating that RAG reduces hallucination rates from 20% to 5% in GPT-4.\\nThe benchmark, available on GitHub, provides a real-world evaluation framework for improving LLM reliability.\\n\\n### Strengths\\n- Unlike traditional hallucination benchmarks, this study leverages real-world user queries from ShareGPT, making the evaluation more reflective of practical challenges faced by LLMs.\\n- The five-category classification of hallucination-prone queries enables a fine-grained analysis, allowing researchers to understand the weaknesses of LLMs across different types of challenges.\\n- The paper rigorously tests Self-Reflection (SR) and RAG, demonstrating their effectiveness in reducing hallucinations. This provides practical guidance for improving LLM reliability.\\n\\n### Weaknesses\\n- The study evaluates a range of LLMs, but lacks results for state-of-the-art proprietary models such as Claude and Gemini, which could provide a more complete picture of hallucination rates.\\n- While RAG significantly reduces hallucination rates, its effectiveness depends on the quality and recency of retrieved information. The paper does not discuss potential biases or limitations introduced by retrieval-based methods.\", \"rating\": \"6\", \"confidence\": \"4\"}" ] }
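The HaluEval-Wild record above describes a pipeline that categorizes real user queries, builds RAG-based reference answers, and has a judge flag hallucinations. The sketch below is a minimal, purely illustrative skeleton of that evaluation loop: the three stub functions stand in for model and retrieval calls and are not the benchmark's actual implementation, and the example queries and category labels are hypothetical.

```python
from collections import defaultdict

# Hypothetical stand-ins: in the real benchmark these are model/retrieval calls.
def subject_model(query: str) -> str:
    return "model answer to: " + query

def rag_reference(query: str) -> str:
    return "retrieval-grounded reference for: " + query

def judge_is_hallucination(answer: str, reference: str) -> bool:
    # Trivial placeholder check; the benchmark uses an LLM judge instead.
    return reference.split(": ", 1)[-1] not in answer

# Hypothetical queries with category labels (the benchmark defines five types).
queries = [
    {"text": "Who won the 2022 World Cup?", "category": "out-of-scope"},
    {"text": "Prove that sqrt(2) is irrational.", "category": "complex reasoning"},
]

per_category = defaultdict(lambda: [0, 0])      # category -> [hallucinated, total]
for q in queries:
    answer = subject_model(q["text"])
    reference = rag_reference(q["text"])
    per_category[q["category"]][0] += judge_is_hallucination(answer, reference)
    per_category[q["category"]][1] += 1

for category, (bad, total) in per_category.items():
    print(f"{category}: hallucination rate {bad / total:.0%} ({bad}/{total})")
```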
sa25C7OGFh
Analyzing Memorization in Large Language Models through the Lens of Model Attribution
[ "Tarun Ram Menta", "Susmit Agrawal", "Chirag Agarwal" ]
Large Language Models (LLMs) are prevalent in modern applications but often memorize training data, leading to privacy breaches and copyright issues. Existing research has mainly focused on post-hoc analyses—such as extracting memorized content or developing memorization metrics—without exploring the underlying architectural factors contributing to memorization. In this work, we investigate memorization from an architectural lens by analyzing how attention modules at different layers impact its memorization and generalization performance. Using attribution techniques, we systematically intervene in the LLM's architecture by bypassing attention modules at specific blocks while keeping other components like layer normalization and MLP transformations intact. We provide theorems analyzing our intervention mechanism from a mathematical view, bounding the difference in layer outputs with and without our attributions. Our theoretical and empirical analyses reveal that attention modules in deeper transformer blocks are primarily responsible for memorization, whereas earlier blocks are crucial for the model's generalization and reasoning capabilities. We validate our findings through comprehensive experiments on different LLM families (Pythia and GPT-Neo) and five benchmark datasets. Our insights offer a practical approach to mitigate memorization in LLMs while preserving their performance, contributing to safer and more ethical deployment in real-world applications.
[ "Memorization", "Generalization", "Language Models", "Model Attribution", "Attention" ]
Accept
https://openreview.net/pdf?id=sa25C7OGFh
https://openreview.net/forum?id=sa25C7OGFh
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "xzkOVt0FFT", "xDM0KHbM6j", "mJk9P3LKiF", "bJgMKHLcM2" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741109512206, 1740880813574, 1740977599732, 1740870886046 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission55/Reviewer_FiZ3" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission55/Reviewer_f59i" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission55/Reviewer_udMV" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"comment\": \"Reviewers agree that the study is well-executed, novel, and relevant to understanding LLM memorization. Strengths include clear theoretical framing, experimentation, and practical implications for mitigating memorization while preserving generalization. However, some concerns remain about the generalizability of results to larger models and the theoretical explanations for why later layers contribute more to memorization.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Anonymous Review\", \"review\": \"The paper introduces a \\\"short-circuit\\\" method to bypass attention modules in transformers by replacing attention weights with that of the identity matrix. This is identical to using value (V) projection of the token as SDPA (Q, K, V) output. This intervention is used to bypass attn modules and study memorization in decoder block only transformers. Further, paper provides a theoretical framework explaining how disabling attention at different depths affects memorization and generalization. Additionally, the work also discusses an approach to mitigating memorization while maintaining generalization.\\n\\nIn their experiments, it is empirically shown that this short-circuiting of attention reduces memorization whilst impact on text generation performance is dependent on which layers' head is it applied on. Short-circuiting later blocks reduce memorization with minimal cost. \\n\\nThis paper aids the study of memorization and generalization trade-off and shows how attention module reacts to both with and without their method. Their method showcases acceptable performance in reducing memorization with minimal impact on generalization. The paper also matches the workshop and the venue.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Focusing on Role of Attention Blocks in Memorization\", \"review\": \"This paper examines the role of attention blocks in implementing memorized sequences in large language models. They introduce a procedure of \\\"short-circuiting\\\" attention blocks and show that while initial attention blocks play a major role in implementing both general capabilities and memorization, later layers can be effectively \\\"short circuited\\\" (reverted to the identity) in order to mitigate memorization without harming general model capabilities. This is empirically tested via experiments on GPT-Neo and Pythia models. I think that the this paper has some interesting contributions: firstly most prior work focuses on examining the role of MLP components for storing memorization (in the form of key-value) pairs while this paper focuses on the attention mechanism. While the empirical findings are somewhat interesting, however, I find the theoretical analysis unconvincing. It seems to me that the primary conclusion of the theory is that the difference associated with short-circuiting an attention block accumulates across layers, which is unsurprising. 
While this explains the role played by later layer short-circuiting as a mechanism for roughly maintaining the original input distribution, it does not explain why memorization should necessarily compound in later attention blocks -- which is the primary claim made by this paper. I also believe that the paper could be stronger if the impact of short-circuiting attention blocks on general validation loss were also reported. This could provide another lens on the impact on general model performance. Finally, I think the paper would be more compelling if a stronger evaluation was used to measure memorization. Currently, the evaluation is simply exact match, however, this may provide a false sense of security against other prompting strategies that could elicit memorized sequences.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Review: Analyzing Memorization in Large Language Models through the Lens of Model Attribution\", \"review\": \"The paper \\\"Analyzing Memorization in Large Language Models Through the Lens of Model Attribution\\\" explores the issue of memorization in LLMs and investigates architecture-specific details of LLMs contributing to memorization. The authors focus solely on the attention mechanism, claiming that the reasoning of LLMs is largely derived from multi-headed self-attention, as opposed to the FFN and other layers. The introduction is well written and the primer/refresher on decoder-only LLM architecture and the self-attention mechanism is a nice touch; however, a lot of terminology is not well defined in this section. The authors propose a novel \\\"attention short circuiting\\\" mechanism where the attention operation at any given layer is replaced by an identity matrix multiplied by the value matrix, keeping the FFN and other components untouched. Despite the narrow focus on the attention mechanism, the analysis of architecture-specific details contributing to memorization is a great study. The authors' findings suggest that deeper attention layers contribute more to memorization, whereas earlier layers are essential for reasoning and generalization.\\n\\nThe approach is validated using Pythia and GPT-Neo model families on a handful of benchmark datasets, showing a practical method to reduce memorization while preserving performance. The results generalize across different scales of the Pythia model family, indicating that the technique is not only effective on smaller models but could also be feasible for larger models. While the experimental results are compelling, the focus on smaller LLMs, in particular LLMs which share the same huggingface attention layer (gpt-neo-x), is concerning as it does not represent a large enough sample to generalize to other LLMs. It is not convincing that the short-circuit mechanism works in general, reducing memorization while keeping reasoning ability intact. It seems quite invasive to short-circuit the attention layer, and while the results are compelling, it would be nice to see more extensive experimentation to support the claim. \\n\\nThe quality of this work is high, and the clarity is acceptable. The mathematical rigour is great and easy to read. While there are some limitations, the strengths outweigh them. The proposed approach has the potential to influence future research on ethical AI deployment and architectural understanding of memorization in LLMs.\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
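The reviews above describe "short-circuiting" an attention block: the attention weights are replaced by the identity, so each token's output reduces to its own value projection while the MLP and normalization layers are left untouched. The toy module below illustrates that idea on a minimal single-head self-attention layer (no causal mask, no layer norm); it is a sketch of the concept, not the paper's implementation or the GPT-NeoX attention code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySelfAttention(nn.Module):
    """Single-head self-attention with an optional 'short-circuit' mode."""

    def __init__(self, d_model: int, short_circuit: bool = False):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        self.out = nn.Linear(d_model, d_model)
        self.short_circuit = short_circuit

    def forward(self, x):                       # x: (batch, seq, d_model)
        v = self.v(x)
        if self.short_circuit:
            # Attention weights replaced by the identity: each token only
            # attends to itself, so the mixed output is just its value vector.
            mixed = v
        else:
            q, k = self.q(x), self.k(x)
            scores = q @ k.transpose(-2, -1) / (x.size(-1) ** 0.5)
            mixed = F.softmax(scores, dim=-1) @ v
        return self.out(mixed)

x = torch.randn(2, 5, 32)
attn = ToySelfAttention(32)
attn_sc = ToySelfAttention(32, short_circuit=True)
attn_sc.load_state_dict(attn.state_dict())      # same weights, different routing
print(attn(x).shape, attn_sc(x).shape)
```

Applying such a flag only to the deeper blocks of a real model would mirror the intervention the reviews discuss, where later-layer short-circuiting reduced memorization at little cost to generation quality.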
s8OCncXdpZ
Towards Effective Discrimination Testing for Generative AI
[ "Thomas P Zollo", "Nikita Rajaneesh", "Richard Zemel", "Talia B. Gillis", "Emily Black" ]
Generative AI (GenAI) models present new challenges in regulating against discriminatory behavior. We argue that GenAI fairness research still has not met these challenges; instead, a significant gap remains between bias assessment methods and regulatory goals. This leads to ineffective regulation that can allow deployment of reportedly fair, yet actually discriminatory, GenAI systems. Towards remedying this problem, we connect the legal and technical literature around GenAI bias evaluation and identify areas of misalignment. Through four case studies, we demonstrate how this misalignment can result in discriminatory outcomes in real-world deployments, especially in adaptive or complex environments. We offer practical recommendations for improving discrimination testing to better align with regulatory goals and enhance the reliability of fairness assessments in the future.
[ "algorithmic fairness", "generative AI", "language models", "text-to-image", "evaluation", "AI regulation", "algorithmic discrimination", "anti-discrimination law" ]
Accept
https://openreview.net/pdf?id=s8OCncXdpZ
https://openreview.net/forum?id=s8OCncXdpZ
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "rZMhqjAint", "KlEW0IhFEh", "GRvkNIkQ2Q", "6rQZalmUlb" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740869104932, 1740857338049, 1741082532334, 1740340836217 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission18/Reviewer_UrC9" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission18/Reviewer_NXfA" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission18/Reviewer_oiU3" ] ], "structured_content_str": [ "{\"title\": \"This paper presents a compelling critique of fairness testing in Generative AI, effectively demonstrating its misalignment with regulatory goals but it does not provide concrete, implementable solutions for standardizing discrimination testing.\", \"review\": [\"## Strengths\", \"The paper addresses a major regulatory challenge in AI fairness, discussing how current bias testing methods are inadequate\", \"The authors provide a strong interdisciplinary analysis, linking anti-discrimination law with fairness testing methodologies to illustrate regulatory shortcomings in AI governance\", \"The resume screening, red teaming variability, and multi-turn interaction case studies offer practical, real-world examples of how flawed fairness testing can allow discrimination to go undetected\", \"The paper makes a strong argument that GenAI fairness testing is inconsistent, urging the research community to develop standardized, robust, and repeatable evaluation frameworks\", \"## Weaknesses\", \"The paper does not introduce a concrete, implementable framework for standardizing bias testing in GenAI systems\", \"The paper does not provide a formal mathematical framework for the proposed fairness evaluation improvements\", \"The resume screening and red teaming experiments rely on LLM-generated synthetic data and simulations, which may not fully reflect real-world biases and human decision-making\", \"The study critiques existing fairness tests but does not compare its proposals against alternative approaches, such as causal inference methods or adversarial debiasing techniques\", \"While the paper highlights the importance of multi-turn fairness evaluation, it does not discuss the computational challenges of testing models in long, dynamic interactions at scale\"], \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"Important evidence on the tenuous state of evaluations in the context of decision-making\", \"review\": [\"Quality: 6\", \"Pros: unique integration of policy/technical work to add to the body of evidence that evaluations need to be more robust.\", \"Cons:\", \"Paper missing a conclusion\", \"Case Study 1: Have you run power calculations to determine the sample size of 250? Are the differences statistically significant? Why do you choose to include a range of personal characteristics (Table 2) that are typically not observed in the hiring process? This seems compromising to external validity. Line 227 and related stats -- should these refer to the gap as \\u201cpercentage points\\u201d not \\u201c%\\u201d? What exactly is the gap that is reported? Which races are accepted more than others? Is the difference statistically significant?\", \"Figure 1 is confusing: are the challenges meant to only refer to those in discrimination law (as the reference on the left would suggest)? What is the use case considered? 
In the resume example, presumably the GenAI model is still giving a recommendation for a decision?\"], \"clarity\": [\"5\", \"Pros: useful diagrams and helpful context in the appendix. Rich with detail.\", \"Cons: the framing of the paper is confusing as a reader for a number of reasons:\", \"I understand the heart of this paper as \\\"evaluations are tenuous and not robust and therefore making policy decisions using them as basis is unwise. As a solution, we need to make evaluations and the reporting of evaluations more robust and transparent.\\\" This could be more clearly stated in the paper.\", \"-The abstract states that \\\"a significant gap remains between bias assessment and regulatory goals,\\\" however many of the policy documents that the authors cite are in fact standards, guidelines, and voluntary measures, which are categorically *not* regulations. Be wary of using language like \\\"regulation\\\" or \\\"mandate,\\\" as in line 154, if the guideline is not in fact a regulation and/or enforced. Even the policy documents that *are* binding (e.g. the EU AI Act) do not yet clearly define how evaluations map to requirements -- this is currently being defined in the Codes of Practice, to be finalized in May. Including these details more clearly in the framing of the piece would be useful.\", \"The mix of technical and policy language content throughout the piece is confusing and the intended audience is unclear (policymakers? technical researchers?). This could be resolved by more clearly separating policy details as motivation; though even after doing this, if the target audience is technical, then the density and specificity of policy details should likely be pared down or placed in the appendix.\"], \"originality\": [\"5\", \"Pros: original experiments are interesting.\", \"Cons: much has been written on evaluations being sensitive to slight perturbations (i.e. https://www.apolloresearch.ai/blog/we-need-a-science-of-evals) -- it would be useful to add such literature and framing to this piece.\"], \"significance\": \"5\\n- Pros: this piece adds to the body of evidence that evaluations need to be more robust.\\n- Cons: The authors state, \\u201cin light of this, we contend that progress in technical methodologies\\nfor bias assessment must precede policy-making efforts to enable reliable discrimination testing.\\u201d Do the authors propose pausing policymaking (which is often quite a slow process)? Or expediting eval development? Or doing the two simultaneously?\", \"rating\": \"6\", \"confidence\": \"5\"}", "{\"decision\": \"Accept\", \"comment\": \"Please address the reviewer concerns\", \"title\": \"Paper Decision\"}", "{\"title\": \"Beautiful paper on pressing issue.\", \"review\": [\"This paper tackles an important issue: how do we regulate the safety of GenAI given (a) an infinite number of use cases, (b) that differences in test procedures can produce different results, (c) that standard frameworks lack translation to Gen AI, and (d) a lack of standardized metrics and testing protocols? The authors present a series of compelling case studies with experiments and clear mitigation suggestions. Overall, I found the paper important, well-written, and thought-provoking. 
Because I agree with the major claims of this work, I will focus this review on questions and areas for further clarity:\", \"[Case Study 1] Do you expect the following finding to persist if the resumes (rather than summaries) were provided?: \\u201cHowever, as shown in the right plot of Figure 2, based on summaries from Llama-2-7B, the LLM decision-maker selects white candidates for interviews at a 5% higher rate than Black or Hispanic candidates, despite the underlying resumes being exactly the same.\\u201d\", \"[Case Study 1] Why choose a single profession: social worker? Do you think the experiments would replicate beyond this profession? For instance, what would happen if you chose a profession that is more likely to be the subject of fine-tuning efforts (e.g., CEO, surgeon, computer scientist)?\", \"[Case Study 1] Related to the previous point, social work is overwhelmingly dominated by women. I would be interested in breakdowns by gender in addition to race. Do summaries for men and women differ because social work is a female-dominated field?\", \"In the spirit of standardization, could results be provided in the form of effect sizes (e.g., Cohen's d) instead of or in addition to percentages? In many fields, a difference of <= 5% would not translate to a significant effect size. Of course, I recognize that even small effects have big implications when one considers the sheer number of people being impacted.\", \"My *biggest* question is whether regulators should be the ones who offer \\\"guidance on how to test for violations of these principles, creating an opportunity for developers and/or deploying parties to (intentionally or unintentionally.\\\" Consider how slowly regulation is passed. By the time a law/act/regulation is passed, ML practitioners may have access to even stronger tools. As such, it could be a disservice to force a standardized set of practices (or even guidelines) upon developers or researchers. Instead, would the implementation of peer-reviewed safety standards make sense? Or, would a combination of standard red teaming tests and realistic use cases (a la OMB) be appropriate? In any case, this paper was compelling and enjoyable to read.\"], \"rating\": \"9\", \"confidence\": \"4\"}" ] }
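Two reviews in the record above ask (a) whether reported gaps are percentages or percentage points and (b) whether effect sizes such as Cohen's d could accompany the raw rates. For a gap between two proportions, Cohen's h is the usual analogue; the snippet below works through both quantities on made-up selection rates that are not taken from the paper.

```python
import math

# Hypothetical interview-selection rates for two groups (not the paper's numbers).
rate_a, rate_b = 0.40, 0.35

gap_pp = (rate_a - rate_b) * 100              # absolute gap in percentage points
rel_gap = (rate_a - rate_b) / rate_b * 100    # relative gap in percent

# Cohen's h: effect size for a difference between two proportions.
h = 2 * math.asin(math.sqrt(rate_a)) - 2 * math.asin(math.sqrt(rate_b))

print(f"gap: {gap_pp:.1f} percentage points ({rel_gap:.1f}% relative)")
print(f"Cohen's h: {h:.3f}")                  # ~0.2 small, ~0.5 medium, ~0.8 large
```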
rQf6YtHfB5
Automated Capability Discovery via Model Self-Exploration
[ "Cong Lu", "Shengran Hu", "Jeff Clune" ]
Foundation models have become general-purpose assistants, exhibiting diverse capabilities across numerous domains through training on web-scale data. It remains challenging to precisely characterize even a fraction of the full spectrum of capabilities and potential risks in any new model. Existing evaluation approaches often require significant human effort, and it is taking increasing effort to design ever harder challenges for more capable models. We introduce Automated Capability Discovery (ACD), a framework that designates one foundation model as a scientist to systematically propose open-ended tasks probing the abilities of a subject model (potentially itself). By combining frontier models with ideas from the field of open-endedness, ACD automatically and systematically uncovers both surprising capabilities and failures in the subject model. We demonstrate ACD across a range of foundation models (including the GPT, Claude, and Llama series), showing that it automatically reveals thousands of capabilities that would be challenging for any single team to uncover. We further validate our method's automated scoring with extensive human surveys, observing high agreement between model-generated and human evaluations. By leveraging foundation models' ability to both create tasks and self-evaluate, ACD is a significant step toward scalable, automated evaluation of novel AI systems.
[ "large language models", "foundation models", "automated evaluation", "model self-exploration" ]
Accept
https://openreview.net/pdf?id=rQf6YtHfB5
https://openreview.net/forum?id=rQf6YtHfB5
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "rGIGl5baUV", "r2Vuw7ZUKn", "mfIHPMWBRR", "EvSrH9yULv" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740424942620, 1740703744479, 1741109412133, 1740855275210 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission48/Reviewer_r6aF" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission48/Reviewer_9FBa" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission48/Reviewer_WK1A" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"review\": \"Summary: This paper proposes Automated Capability Discovery (ACD), a framework that systematically proposes open-ended evaluation tasks to discover new capabilities and failures of models.\", \"strengths\": [\"The problem of model evaluation has become increasingly more important and difficult, particularly as people begin to interact with models in unexpected and potentially dangerous ways. Current evaluations are often quite ad-hoc and limited, and this paper proposes an interesting approach to induce more structure in coming up with new tasks for evaluations and discovering failures and capabilities of new models.\", \"The proposed framework is very flexible and provides an interesting approach to scalable oversight, where human oversight is aided by models that are good at /proposing/ tasks, even if they are incapable of performing them themselves.\"], \"weaknesses\": [\"The evaluation of the proposed automated discovery approach is a bit lacking at the moment. In particular, it isn't clear whether the authors were able to derive new conclusions about models that have not or could not be found with existing evaluation or human-based interaction approaches. Essentially, how does the method compare to the existing baseline of human evaluations? More discussion on this point would be very beneficial.\", \"There was no verification of the validity of the \\\"interesting and new\\\" metric the authors used to evaluate the diversity of the discovered tasks. It would have been interesting to include this in the human survey conducted to verify that the tasks were actually novel with respect to each other.\", \"It is unclear whether the discovered tasks and capabilities are core, intuitive tasks, or if the method was truly able to uncover fringe, unexpected behavior of the models. Were the authors able to find any safety-critical failures of models or any completely unexpected abilities that weren't previously known?\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Review\", \"review\": \"Summary\\n\\nThe paper introduces Automated Capability Discovery (ACD), a novel framework that leverages foundation models such as such as GPT, Claude, and Llama, to evaluate themselves or other models by autonomously generating and testing new tasks. ACD choses one model as a scientist to create open-ended challenges for another subject model, mimicing on manually designed benchmarks with human-trial-and-error. ACD iteratively probes the capabilities and weaknesses of the subject model, enabling scalable and systematic evaluation across reasoning, math, code generation, and creative problem-solving. \\nBeyond detecting failure modes, ACD also facilitates cross-model comparisons by curating a repository of tasks that can be used to benchmark different AI systems under consistent evaluation conditions. 
The results show that different scientist models generate distinct probing challenges, suggesting that an ensemble approach could enhance capabilities coverage. The authors argue that ACD is a step toward scalable and continuous AI evaluation, helping improve model reliability and safety by surfacing potential risks before deployment. \\n\\nReview\\n\\nThe paper addresses a crucial challenge in AI evaluation by proposing an automated framework that scales beyond human-designed benchmarks. ACD effectively leverages the generative capabilities of foundation models to explore their own strengths and weaknesses, reducing human effort while increasing coverage of model capabilities and failures. The results are compelling, demonstrating ACD\\u2019s ability to generate meaningful tasks and validate its findings with human evaluations. The ability to use different scientist models for task generation is also a good contribution, as it shows that diverse models can uncover different aspects of AI performance. Overall, I think this is a well-executed study that offers both efficiency and depth in AI model capability discovery. While there are areas for future improvement, e.g., extend to multi-modal models, the ACD framework represents a meaningful advancement in AI model assessment.\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"comment\": \"The reviewers agree that ACD is a well-executed and timely approach to tackling the increasingly difficult problem of evaluating foundation models. The paper is a good fit for the workshop, as model evaluation is crucial for building trust in AI systems. The flexibility of ACD in leveraging different \\\"scientist\\\" models and its ability to uncover previously unknown behaviors further enhance its relevance. It would have been nice to see discussion on whether ACD uncovers safety-critical failures or merely expected model behavior.\", \"title\": \"Paper Decision\"}", "{\"title\": \"ACD contributes needed innovation to the field of evals; room for more clarity\", \"review\": [\"Quality: 8\", \"Pros: authors make a meaningful contribution to evaluations, fostering innovation and acceleration amidst rapidly evolving models.\", \"Cons/areas for improvement: a) Authors might consider clearly stating what evaluation objectives ACD can accomplish and which it cannot (i.e. can it be used for benchmarking capabilities between models? can it be used to probe specific dangerous capabilities? are there any tradeoffs of its exploratory nature?) b) Figure 1 does\\u2019t accurately represent the status quo evaluation alternative to ACD \\u2014 the human counterfactual is quite reductive and simplistic and there are in fact options between this and ACD. c) Other minor issues: Gpt-4o is repeated twice in line 1128, should line 939 read \\\"you should *not* use this function\\\"?\"], \"clarity\": [\"7\", \"Pros: graphics, description, and supplementary materials portray process clearly.\", \"Cons/areas for improvement: a) More information could also be provided on the humans in the large-scale human surveys (i.e. demographics, country, education, age, etc.). b) Clarify where human decision-making comes into play in ACD's process. How does the scientist model know if it requires multiple refinements (line 984)? Can users request that ACD investigate particular task families? c) Clarify whether the judge always scores binarily and, if so, how this decision was made (i.e. relative to a more granular, continuous score). 
Does the judge respond differently if the system prompt is slightly tweaked? How is the criteria created for the LLM judge and what limitations might exist? d) Clarify whether this is for foundation models more generally (e.g. different modalities) or just LLMs. Both terms are used.\"], \"originality\": [\"9\", \"Pros: dynamic/automated evaluations are underexplored and this is a useful contribution.\", \"Cons: Other work has been done on autograders; the authors could acknowledge the range of variation in LLM-as-a-judge, explain their design decisions for this particular judge, limitations to the particular LLM-as-a-judge approach, and variations to explore in future research.\"], \"significance\": [\"7\", \"Pros: see \\\"quality\\\"\", \"Cons: to give context to the significance of this piece, the authors could provide more information on comparisons between the performance/abilities of ACD relative to other similar tools in the field. In what ways does it outperform other evaluation tools and it what ways does it fall short? b) D1 shows that new tasks emerge but do the new tasks give us sufficiently new information about the models and real-world capabilities? i.e. how does variance of tasks change over generation?\"], \"rating\": \"8\", \"confidence\": \"4\"}" ] }
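The ACD record above describes a scientist model proposing tasks, a subject model attempting them, a judge scoring pass/fail, and an archive that keeps only new-and-interesting tasks. The loop below is a deliberately stubbed sketch of that control flow; every function is a hypothetical placeholder for what would be foundation-model calls in the real system, and the novelty filter here is trivial.

```python
import random

# Hypothetical stand-ins for the three roles in the loop; in ACD these are
# foundation-model calls (scientist proposes, subject attempts, judge scores).
def scientist_propose(archive):
    return f"task-{len(archive)}: invent a constraint-satisfying puzzle"

def subject_attempt(task):
    return "attempted solution for " + task

def judge_pass(task, attempt):
    return random.random() > 0.5        # placeholder for a binary pass/fail judge

def is_new_and_interesting(task, archive):
    return task not in archive          # placeholder novelty/interestingness filter

random.seed(0)
archive = []                            # growing repository of discovered tasks
for _ in range(10):
    task = scientist_propose(archive)
    if not is_new_and_interesting(task, archive):
        continue
    passed = judge_pass(task, subject_attempt(task))
    archive.append(task)
    print(f"{task} -> {'capability' if passed else 'failure'}")
```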
rDC2UVdB0t
Siege: Multi-Turn Jailbreaking of Large Language Models with Tree Search
[ "Andy Zhou", "Ron Arel" ]
We introduce Siege, a multi-turn adversarial framework that models the gradual erosion of Large Language Model (LLM) safety through a tree search perspective. Unlike single-turn jailbreaks that rely on one meticulously engineered prompt, Siege expands the conversation at each turn in a breadth-first fashion, branching out multiple adversarial prompts that exploit partial compliance from previous responses. By tracking these incremental policy leaks and reinjecting them into subsequent queries, Siege reveals how minor concessions can accumulate into fully disallowed outputs. Evaluations on the JailbreakBench dataset show that Siege achieves a 100% success rate on GPT-3.5-turbo and 97% on GPT-4 in a single multi-turn run, using fewer queries than baselines such as Crescendo or GOAT. This tree search methodology offers an in-depth view of how model safeguards degrade over successive dialogue turns, underscoring the urgency of robust multi-turn testing procedures for language models.
[ "Large Language Models", "Jailbreaking", "Multi-Turn Attack" ]
Accept
https://openreview.net/pdf?id=rDC2UVdB0t
https://openreview.net/forum?id=rDC2UVdB0t
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "yh0HnoiScO", "UL5VX2XNBl", "4K9Urk7jCV" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741057169280, 1740869976715, 1740687381566 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission128/Reviewer_63A4" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission128/Reviewer_bMGo" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of \\\"TEMPEST: Multi-Turn Jailbreaking of Large Language Models with Tree Search\\\"\", \"review\": [\"The paper introduces a method for\\u00a0jailbreaking LLMs\\u00a0(i.e., making them generate potentially harmful outputs that have been discouraged during the alignment process). The approach is based on\\u00a0multi-turn conversations, where each turn is encoded as a node in a tree. It employs\\u00a0breadth-first tree search\\u00a0to obtain the requested, though forbidden, response. Nodes that elicit\\u00a0partial compliance\\u00a0from the model are considered promising and are used as starting points for the next conversation turn. The method outperforms other jailbreaking baselines while requiring fewer queries to the model.\", \"**Positive Aspects:**\", \"Effective, intuitive, and reasonably simple\\u00a0method for jailbreaking LLMs.\", \"Clear explanation\\u00a0of the method.\", \"Comparison to several baselines,\\u00a0with the proposed method demonstrating clear superiority.\", \"**Negative Aspects:**\", \"The paper lacks sufficient\\u00a0experimental details\\u00a0to ensure reproducibility. For example,\\u00a0what are the exact prompts used for the attacker LLM?\", \"The\\u00a0partial compliance function\\u00a0$\\\\gamma$\\u00a0appears to be a crucial component of the method, but the paper does not explain how compliance is measured in practice.\", \"**Additional Comment:**\", \"Why is the method called \\\"TEMPEST\\\"?\\u00a0The capitalization suggests that it is an acronym, but this is never explained. Simply naming a method for the sake of branding feels like a\\u00a0marketing trick, which, in my view, is not appropriate for a research paper.\"], \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"TEMPEST: Multi-Turn Jailbreaking of Large Language Models with Tree Search\", \"review\": \"This paper presents a compelling examination of a critical flaw in AI safety\\u2014while many models perform well on single-turn safety tests, they often fail under multi-turn adversarial conditions. Through rigorous experimentation, the study demonstrates that its proposed approach is significantly more effective than prior methods, necessitating a reassessment of existing AI defense strategies. By highlighting the insufficiency of current safety measures, the paper makes a strong case for incorporating multi-turn adversarial testing into standard evaluation frameworks. Moreover, while the study primarily focuses on exploiting AI vulnerabilities, its methodology could also be leveraged to enhance AI security by enabling models to \\u201cremember\\u201d previous interactions rather than treating each prompt in isolation. Additionally, further evaluation on diverse datasets beyond JailbreakBench, including real-world adversarial prompts, could strengthen the findings\\u2019 generalizability. 
Finally, the partial compliance metric introduced in the paper is a valuable contribution, but a more detailed explanation of how intermediate compliance levels (1\\u20139) are assigned\\u2014beyond the clear-cut cases of full refusal (0) and complete compliance (10)\\u2014would enhance clarity and reproducibility.\", \"rating\": \"7\", \"confidence\": \"3\"}" ] }
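The Siege record above describes a breadth-first tree search over conversation turns, where branches that elicit partial compliance are kept and expanded until full compliance is reached. The sketch below mirrors only that search skeleton with harmless placeholder stubs for the attacker, target, and judge; it is not the paper's prompts or scoring model, and the 0-10 compliance scale follows the reviews' description of the metric.

```python
from collections import deque

# Hypothetical stubs: in Siege these are the attacker model, the target model,
# and a judge that scores partial compliance on a 0-10 scale.
def attacker_branches(conversation, width):
    return [conversation + [f"follow-up {i} reusing any partial leak"] for i in range(width)]

def target_reply(conversation):
    return f"reply after {len(conversation)} turns"

def compliance_score(conversation):
    # Placeholder: pretend each extra turn erodes safety a little more;
    # the real framework scores the reply content with a judge model.
    return min(10, 2 * len(conversation))

def tree_search(goal, max_depth=4, width=3, threshold=10):
    frontier = deque([[f"initial request: {goal}"]])
    for depth in range(max_depth):
        next_frontier = deque()
        for conv in frontier:
            for branch in attacker_branches(conv, width):
                branch = branch + [target_reply(branch)]
                score = compliance_score(branch)
                if score >= threshold:
                    return depth + 1, branch          # full compliance reached
                if score > 0:                          # keep partially compliant paths
                    next_frontier.append(branch)
        frontier = next_frontier
    return None

print(tree_search("placeholder disallowed request"))
```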
r1OyAGcOcH
Data Efficient Subset Training with Differential Privacy
[]
Private machine learning introduces a trade-off between the privacy budget and training performance. Training convergence is substantially slower and extensive hyperparameter tuning is required. Consequently, efficient methods to conduct private training of models are thoroughly investigated in the literature. To this end, we investigate the strength of data-efficient model training methods in the private training setting. We adapt GLISTER (Killamsetty et al., 2021b) to the private setting and extensively assess its performance. We empirically find that practical choices of privacy budgets are too restrictive for data-efficient training in the private setting. Our code can be found here.
[ "Data Efficient Training", "Differential Privacy" ]
Reject
https://openreview.net/pdf?id=r1OyAGcOcH
https://openreview.net/forum?id=r1OyAGcOcH
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "LwFTVCGFmY", "HIlDJGPEwX", "5ZXUI34pTy" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740510792573, 1740866477935, 1741104082986 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission135/Reviewer_RSKH" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission135/Reviewer_bdtx" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Data Efficient Subset Training with Differential Privacy\", \"review\": [\"**Pros:**\", \"The paper is well written and clear to me\", \"The method are efficient and reasonable\", \"**Cons:**\", \"The baselines are not enough. Only have one full dataset baseline and one subset-based baselines\", \"The evaluated datasets are too small, only with CIFAR10 and MNIST.\", \"The performance of the proposed method is not good enough.\"], \"rating\": \"5\", \"confidence\": \"3\"}", "{\"title\": \"While the paper presents a valuable investigation into the feasibility of data-efficient training in private settings, its empirical results indicate that differential privacy significantly hinders subset selection methods, raising concerns about the practical viability of GLISTER-DP.\", \"review\": [\"## Strengths\", \"The paper addresses balancing differential privacy with data-efficient training, an increasingly relevant subject in privacy-sensitive applications\", \"The inclusion of RANDOM-DP and FULL-DP as baselines allows for a comparative assessment of the proposed method\\u2019s performance under different constraints.\", \"The authors acknowledge why GLISTER-DP underperforms, discussing the restrictive nature of privacy budgets and the failure of the exponential mechanism in subset selection.\", \"## Weaknesses\", \"While the paper provides privacy guarantees, it lacks theoretical insights into how GLISTER-DP\\u2019s subset selection affects generalization in private settings, relying solely on empirical results.\", \"The key takeaway is that GLISTER-DP underperforms compared to RANDOM-DP due to excessive noise in subset selection. This diminishes the practicality of the proposed approach.\", \"While the experiments are detailed, the study focuses primarily on image datasets; it would have been beneficial to see results on text or tabular datasets to generalize conclusions.\", \"The proposed method is the slowest to converge among the three approaches, further diminishing its practical utility in real-world applications.\"], \"rating\": \"4\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}" ] }
pzj5KxFOyG
Dynaseal: A Backend-Controlled LLM API Key Distribution Scheme with Constrained Invocation Parameters
[ "Jiahao Zhao", "Fan Wu", "南佳怡", "魏来", "Yang YiChen" ]
The proliferation of edge-device interactions with cloud-based Large Language Models (LLMs) has exposed critical security vulnerabilities in traditional authentication methods like static Bearer Tokens. Existing solutions---pre-embedded API keys and server relays---suffer from security risks, latency, and bandwidth inefficiencies. We present \textbf{Dynaseal}, a secure and efficient framework that empowers backend servers to enforce fine-grained control over edge-device model invocations. By integrating cryptographically signed, short-lived JWT tokens with embedded invocation parameters (e.g., model selection, token limits), Dynaseal ensures tamper-proof authentication while eliminating the need for resource-heavy server relays. Our experiments demonstrate up to 99\% reduction in backend traffic compared to relay-based approaches, with zero additional latency for edge devices. The protocol's self-contained tokens and parameterized constraints enable secure, decentralized model access at scale, addressing critical gaps in edge-AI security without compromising usability.
[ "Authentication", "Edge Computing Security", "LLM Access Control", "Parameterized JWT Constraints", "Backend Traffic Optimization", "Zero Client-side Keys", "Replay Attack Resistance" ]
Accept
https://openreview.net/pdf?id=pzj5KxFOyG
https://openreview.net/forum?id=pzj5KxFOyG
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "xVHFyKL2FU", "YYhlugZ6Zp", "Uifn2OQQEz", "9YzabRAmGt" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740899532217, 1741107567163, 1740126390000, 1740433078496 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission3/Reviewer_t2qu" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission3/Reviewer_tcwW" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission3/Reviewer_RT6C" ] ], "structured_content_str": [ "{\"title\": \"Secure Backend-Controlled Access for LLM APIs with Parameterized Invocation Constraints\", \"review\": \"The paper introduces Dynaseal, a framework designed to address security and efficiency challenges in edge-device interactions with cloud-based LLMs. By combining cryptographically signed short-lived JWT tokens with backend-enforced invocation parameters, the authors propose a novel alternative to traditional static API keys and server relays. The core innovation lies in decentralizing authentication while maintaining strict control over model invocation parameters such as model selection, token limits, and expiration times.\\n\\nThe strengths of this work are evident in its practical relevance and security-first design. Dynaseal eliminates the need for resource-heavy server relays, reducing backend traffic by 99% (as shown in Table 1) without introducing latency for edge devices. The integration of JWT tokens with embedded constraints ensures tamper-proof authentication, and the short expiration periods (e.g., 1s) mitigate replay attacks effectively. The security analysis comprehensively addresses attack vectors, including token tampering and unauthorized parameter manipulation, demonstrating a robust defense mechanism. The paper\\u2019s structure is logical, with clear visualizations (Figures 1\\u20133) that aid in understanding the protocol\\u2019s workflow and token architecture.\\n\\nHowever, the work has notable limitations. Performance metrics are narrowly focused on traffic reduction, omitting critical evaluations such as computational overhead for token generation/validation and latency under varying network conditions. The claim of \\\"zero additional latency\\\" lacks empirical validation, particularly in scenarios with unstable connectivity. While short token expiration times enhance security, they risk high timeout rates in real-world environments, yet the paper does not propose adaptive strategies (e.g., dynamic TTL adjustments) to address this trade-off. Furthermore, the comparison with existing methods (Table 2) oversimplifies the landscape; a deeper engagement with related work, such as OAuth 2.0 extensions or federated token systems, would strengthen the novelty claim. Reproducibility concerns also arise due to sparse experimental details (e.g., LLM provider configurations, test prompts), and the hypothetical test case in Appendix A.2 limits confidence in real-world applicability.\\n\\nThe significance of Dynaseal lies in its potential to reshape edge-AI security paradigms. By enabling backend-enforced constraints on LLM invocations, the framework offers a scalable solution for modern deployments. However, broader adoption hinges on addressing network reliability challenges and expanding performance evaluations. 
Future work should explore adaptive token management, benchmark against state-of-the-art authentication frameworks, and provide open-source implementations to facilitate reproducibility.\", \"rating\": \"6\", \"confidence\": \"2\"}", "{\"decision\": \"Accept\", \"comment\": \"The reviewers find Dynaseal to be a novel and relevant contribution but also highlight critical gaps in performance evaluation, security analysis, and experimental rigor. The paper is marginally above the acceptance threshold due to its strong motivation and potential impact, but its claims require stronger empirical support especially by benchmarking against industry-standard authentication frameworks.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Token-Based LLM Invocation for Edge Devices: Promising Results, but Methodology Needs Clarification\", \"review\": \"This paper proposes a novel model invocation approach for edge devices that balances the low latency of direct client-to-LLM-server connections with the security of a relay-based system. Instead of directly authenticating with the LLM server or solely relying on a relay, the proposed approach uses a hybrid method. The client first obtains an authentication token from a relay server (backend). This token is then used by the client to directly query the LLM service. This approach introduces challenges, such as the potential for token reuse which the authors address by implementing techniques like extremely short token validity periods.\\n\\nTo evaluate the effectiveness of their approach, the authors compare it against both direct authentication and traditional relay setups. Their experimental results demonstrate that the proposed method reduces the load on the relay server by 99% compared to a standard relay system while maintaining security comparable to a relay system. The main claim is that by introducing this token-based approach, the authors reduce the load, increase the security and keep the latency low.\", \"strengths\": [\"Demonstrates a new model invocation approach where edge devices retrieve authentication tokens from the backend server which is then used to query an LLM service\", \"Technique demonstrates a substantial reduction in bandwidth usage of the backend server while ensuring that the system is resistant to attacks in ideal situations\"], \"weaknesses\": [\"It is unclear to me whether the paper is a good fit for this workshop. While it is written with LLMs in mind, the same setup can be used for any generic scenario where a client needs to rely on a separate entity to process a large amount of information.\", \"In the same vein, it would be ideal if the authors could examine approaches seen in traditional networks in the field of computer science instead of just LLM providers.\", \"A lot of information is missing on how the experiments were conducted:\", \"The authors mention that various LLM service providers were used but do not go into detail about what these are (lines 196). Are these service providers on the web like OpenAI or were these local service providers that were used for the experiment?\", \"What was the latency between the various parts of the network like the client, backend and LLM provider?\", \"How are these digital signatures created? A simple citation would be sufficient to get an idea.\", \"How short are these \\\"extremely short validity periods\\\" for the duration of the authentication tokens and how are they determined? 
Do they account for latency based on large-scale geographical distance?\", \"(minor) It's not clear what the values in Table 1 are supposed to represent. I'm assuming the B in the caption refers to bytes but this should be named more explicitly\", \"(minor) While the 99% reduction in bandwidth usage for the backend is great, it is unclear if this leads to substantial improvements in practice. It would be ideal if the authors could provide citations of such systems where the backend server is the bottleneck.\"], \"question\": [\"Really minor, but why is the technique called dynaseal?\"], \"rating\": \"6\", \"confidence\": \"2\"}", "{\"title\": \"Review of \\\"Dynaseal: A Backend-Controlled LLM API Key Distribution Scheme\\\"\", \"review\": \"### Strengths\\nThis paper introduces Dynaseal, a backend-controlled authentication mechanism that replaces static API keys and server relays with cryptographically signed, short-lived JWT tokens. It improves security, efficiency, and scalability by enforcing fine-grained invocation constraints while reducing backend traffic by 99%. The approach is well-motivated, tackling real-world security vulnerabilities in LLM API access. The writing is clear and well-structured, with detailed explanations of token architecture, authentication flow, and attack prevention.\\n\\n### Weaknesses\\nThe paper lacks rigorous security analysis, with limited discussion of adversarial attacks beyond token replay. While it claims zero additional latency, no empirical latency measurements are provided. The scalability impact of signing/verifying JWT tokens at high loads is unexplored, and the evaluation focuses only on traffic reduction, missing real-world adversarial testing. Clarifications on experiment methodology and comparisons with industry-standard authentication mechanisms (e.g., OAuth 2.0) would strengthen the work.\", \"rating\": \"6\", \"confidence\": \"3\"}" ] }
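The Dynaseal record above describes backend-minted, short-lived signed tokens that carry invocation constraints (model name, token limits, expiry) which the LLM provider verifies before serving a request. The sketch below shows that token shape using the PyJWT library with a symmetric HS256 key for brevity; the claim names, TTL values, and key handling are illustrative assumptions, and a deployment like the one described would more plausibly use asymmetric signing so the provider only needs the backend's public key.

```python
import datetime as dt

import jwt  # PyJWT

SECRET = "backend-signing-key"       # illustrative only; real keys would not be hard-coded

def mint_token(model: str, max_tokens: int, ttl_seconds: int = 1) -> str:
    """Backend mints a short-lived token with invocation constraints baked in."""
    now = dt.datetime.now(dt.timezone.utc)
    claims = {
        "model": model,              # only this model may be invoked
        "max_tokens": max_tokens,    # cap the provider is expected to enforce
        "iat": now,
        "exp": now + dt.timedelta(seconds=ttl_seconds),
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def validate(token: str) -> dict:
    """Provider verifies signature and expiry, then reads the embedded constraints."""
    return jwt.decode(token, SECRET, algorithms=["HS256"])  # raises if tampered or expired

token = mint_token("gpt-4o-mini", max_tokens=256, ttl_seconds=5)
claims = validate(token)
print(claims["model"], claims["max_tokens"])
```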
oyrDjyasDV
A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage
[ "Rui Xin", "Niloofar Mireshghallah", "Shuyue Stella Li", "Michael Duan", "Hyunwoo Kim", "Yejin Choi", "Yulia Tsvetkov", "Sewoong Oh", "Pang Wei Koh" ]
Sanitizing sensitive text data for release often relies on methods that remove personally identifiable information (PII) or generate synthetic data. However, evaluations of these methods have focused on measuring surface-level privacy leakage (e.g., revealing explicit identifiers like names). We propose the first semantic privacy evaluation framework for sanitized textual datasets, leveraging re-identification attacks. On medical records and chatbot dialogue datasets, we demonstrate that seemingly innocuous auxiliary information, such as a mention of specific speech patterns, can be used to deduce sensitive attributes like age or substance use history. PII removal techniques make only surface-level textual manipulations: e.g., the industry-standard Azure PII removal tool fails to protect 89\% of the original information. On the other hand, synthesizing data with differential privacy protects sensitive information but garbles the data, rendering it much less useful for downstream tasks. Our findings reveal that current data sanitization methods create a \textit{false sense of privacy}, and underscore the urgent need for more robust methods that both protect privacy and preserve utility.
[ "Privacy", "NLP", "Text", "Reidentification", "Data Release", "Sanitization", "Anonymization" ]
Accept
https://openreview.net/pdf?id=oyrDjyasDV
https://openreview.net/forum?id=oyrDjyasDV
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "bfEv6TaMj9", "EAFrBXIAkB", "DbJGhfbW7i", "CBc36c5rLZ" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740937183599, 1740915724972, 1741103769588, 1740909210092 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission85/Reviewer_xgvo" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission85/Reviewer_fCTP" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission85/Reviewer_xepz" ] ], "structured_content_str": [ "{\"title\": \"Paper Review\", \"review\": \"> Summary\\n\\n\\u200bIn this study, the authors conduct a comprehensive evaluation of privacy leakage associated with data sanitization methods. By employing a re-identification attack model and a semantic-based privacy metric, their approach more effectively captures privacy risks compared to traditional lexical matching techniques. The experiments are thorough and supportive.\\n\\n> Pros: \\n\\n1.The topic investigated is interesting, and the findings are engaging. \\n\\n2.The experimental results are comprehensive. \\n\\n> Cons: \\n\\n1.A necessary assumption for the attacker is that they must have access to relevant auxiliary information about the subject, which may not always be easily obtainable, especially in privacy-sensitive domains. \\n\\n2.I was particularly curious about the results showing that even with $\\\\epsilon = 1024$, the privacy metric increased significantly, while utility metrics dropped substantially. It would be helpful if the authors also reported the clip norm and noise scale used in the experiments.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"The work highlights the inadequacies of current text data sanitization methods, introducing a novel dataset-level privacy metric and holistic evaluation framework, but is limited by an ambiguous definition of privacy and a restricted number of datasets\", \"review\": \"A study of text data privacy shows that current sanitisation approaches are inadequate. Current PII removal methods leave 89% of information vulnerable, and even synthetic data without differential privacy remains exploitable. While DP synthesis offers better protection, it comes at a significant performance cost.\", \"pros\": [\"This work introduces a dataset-level privacy metric that addresses the limitations of current data sanitisation methods.\", \"The authors provide a comprehensive framework with a holistic evaluation of the trade-offs in different sanitisation techniques.\"], \"disadvantages\": [\"Ambiguous definition of privacy, which poses a significant challenge in developing universally applicable privacy metrics and protection methods.\", \"Limited number of datasets - it would be beneficial to extend the research with additional datasets from other domains.\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Good paper on an important topic, but missing a constructive angle.\", \"review\": \"The paper conducts a thorough evaluation of various text anonymization methods under a de-anonymization setting where side information is available to the adversary. This is a strictly stronger setting than in previous works (e.g., [1,2]), which is an important and interesting extension of the evaluation of the heuristic anonymization methods of PII-removal, and span- or text-sanythization, bringing them closer in setting to the threat model of DP. 
As such, I believe the paper is a good addition to the workshop.\n\nHowever, there are certain limitations I would like to highlight, which, if addressed by the authors in a future revision of the paper, could potentially increase the paper's impact:\n\n- Currently, the paper does not offer clear insights beyond what has already been known; it simply, and less interestingly, shows that these findings also hold in the examined setting. Namely, the paper's key insights are: (i) DP trades utility for privacy; (ii) heuristic methods (sanitization) provide weaker privacy than DP; and (iii) PII removal does not work once the information is not surface-level. However, I believe there could be more interesting conclusions drawn from the setting, for instance by further analyzing the type and impact of the side information and establishing concrete realistic settings where the anonymizing party does not have access to this information but the adversary does.\n\n- This leads to my second point. I think the current evaluation of certain methods misses a key point: giving access to the side information also in the anonymization method. For instance, expanding the set of attributes to anonymize using [2] to the ones detected in the text, and also giving the iterative anonymizer the side information, should show significant performance increases. While the authors could argue against how realistic such an experiment would be, this (i) first has to be justified in the threat model (see my point above), and (ii) could still showcase the full potential of each method, making sure that the lack of performance does not stem from simply the mismatch between the current evaluation setting and the one that was originally used when designing these methods.\n\n- I would also like to have a more fine-grained analysis in the experiments of where the final information leakage comes from, i.e., is it information that is still contained in the sanitized text, is it information that is only contained in the side information, or does it only come to make sense once these two texts are combined. This is important as, at the moment, it is not clear what use the side information has in the whole process, particularly as the privacy metric is also calculated w.r.t. the private information in the original text and not some underlying private profile. This leads to vastly different types of leakage being masked by the metric; compare, for instance, the following two cases: (i) the type of leakage displayed also in the intro figure, where the side information has no role in the final private information that was extracted, as all the information was still contained in the sanitized text; and (ii) e.g., some feature is present in both the original text and in the side information (e.g., age) and the two can be linked based on a seemingly irrelevant, non-private feature; then, even a perfect sanitization of the original text (w.r.t. the set of private features) will enable the reconstruction of the age feature in the current setting.\n\n- Finally, I think the paper currently lacks some clear outlook, takeaway, or next steps on which one could build. At the moment, it simply makes the statement that certain methods don't work at all, certain methods are too weak, and that DP makes the text useless. But there is no further analysis going into which directions would be promising to explore, or even, what would be a best-practice approach to anonymization given the current tools. 
I think this severely limits the potential impact of the paper, as it reads more like an evaluation report than something that one could build upon.\n\n\n**References**\n\n[1] Y Dou et al., Reducing Privacy Risks in Online Self-Disclosures with Language Models. ACL 2024.\n\n[2] R Staab et al., Language Models are Advanced Anonymizers. ICLR 2025.\", \"rating\": \"6\", \"confidence\": \"5\"}" ] }
ot19dneIID
THE FUNDAMENTAL LIMITS OF LLM UNLEARNING: COMPLEXITY-THEORETIC BARRIERS AND PROVABLY OPTIMAL PROTOCOLS
[ "Aviral Srivastava" ]
Modern machine unlearning techniques for large language models (LLMs) remain heuristic, lacking formal characterization of their fundamental computational limits. We establish the first complexity-theoretic foundation for LLM unlearning, revealing intrinsic tradeoffs between efficiency, precision, and regulatory compliance. Our framework formalizes (ϵ, δ)-machine unlearning via measure-theoretic alignment of retrained and unlearned model distributions, then proves transformer-specific hardness results: exact unlearning is coNP-hard, while approximate unlearning requires Ω(T^{1−o(1)}) time under the Exponential Time Hypothesis (ETH). We construct an optimal Recursive Sketch-and-Freeze protocol achieving these bounds through differential privacy duality and Kronecker-product sketching. Crucially, we identify phase transitions in Rényi unlearning cost at critical model scales (n ≈ d log k). These results provide (1) theoretical benchmarks for evaluating unlearning algorithms, (2) complexity-aware guidelines for AI regulation, and (3) mathematically grounded verification tools for GDPR/CPRA compliance.
[ "Machine Unlearning", "Large Language Models", "Computational Complexity", "Differential Privacy", "GDPR Compliance", "AI Trustworthiness" ]
Accept
https://openreview.net/pdf?id=ot19dneIID
https://openreview.net/forum?id=ot19dneIID
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "tGTVq62b5v", "WqOHcIeHSi", "V50tghJSB2", "ENwOfc8ysu" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740904401926, 1740924314108, 1740983634853, 1740054401042 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission103/Reviewer_76V3" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission103/Reviewer_Bqem" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission103/Reviewer_U1z4" ] ], "structured_content_str": [ "{\"title\": \"Official Review of Submission103\", \"review\": \"# Summary\\nThis paper establishes a complexity-theoretic framework for machine unlearning in large language models. It characterizes inherent tradeoffs between computational efficiency, precision, and regulatory compliance. An optimal Recursive Sketch-and-Freeze protocol is proposed, exposing phase transitions with model scaling.\\n\\n# Strengths\\nThe paper\\u2019s primary strength is its rigorous complexity-theoretic analysis, offering formal hardness proofs and precise computational benchmarks for machine unlearning in LLMs. It combines differential privacy, Kronecker sketching, and recursive certification, establishing provably optimal protocols.\", \"rating\": \"7\", \"confidence\": \"2\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"This paper establishes formal complexity-theoretic limits on LLM unlearning, proving that exact unlearning is coNP-hard, and approximate unlearning requires near-linear time under ETH. While it makes strong theoretical contributions, it assumes white-box access and lacks empirical validation.\", \"review\": \"This paper provides a much-needed theoretical foundation for LLM unlearning, rigorously proving its inherent computational hardness and presenting an optimal protocol for mitigating these challenges. By identifying a trilemma between perfect unlearning, efficiency, and space complexity, the paper also makes an important policy contribution. However, no experimental results are presented to support the theoretical findings. Moreover, the impact of unlearning on model generalization and potential catastrophic forgetting also remains unaddressed. Future work should extend these results to black-box models, possibly by leveraging query-efficient methods from adversarial robustness and differential privacy literature.\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"Summary\", \"review\": \"#### Summary Of The Paper:\\nThis paper presents theoretic foundation for LLM unlearning. The authors show that exact unlearning is coNP-hard and that approximate unlearning requires $\\u03a9(T1\\u2212o(1))$ time under the Exponential Time Hypothesis (ETH). They also construct a Recursive Sketch-and-Freeze protocol.\\n\\n\\n#### Strengths:\\nThe work introduces the Recursive Sketch-and-Freeze protocol and provides provides some formal theoretical definitions and theorems in the context of LLM unlearning./\\n\\n\\n#### Weeknesses:\\nThe authors did not provide a literature review. The paper lacks a brief introduction to other works in this area.\\nThe authors did not provide any examples or analysis of their theorem in relation to publicly available models such as the LLaMA or Mistral families.\\nThe authors did not explore practical black-box unlearning techniques.\\n\\n#### Recommendation:\\nThis paper makes a theoretical contribution to the field of LLM unlearning. 
It would be valuable to include empirical results and verification for both white-box and black-box models. The paper would also be more engaging if it included more references to related works.\", \"rating\": \"4\", \"confidence\": \"1\"}" ] }
oL806RzbDi
Temporally Sparse Attack for Fooling Large Language Models in Time Series Forecasting
[ "Fuqiang Liu", "Sicong Jiang" ]
Large Language Models (LLMs) have shown great potential in time series forecasting by capturing complex temporal patterns. Recent research reveals that LLM-based forecasters are highly sensitive to small input perturbations. However, existing attack methods often require modifying the entire time series, which is impractical in real-world scenarios. To address this, we propose a Temporally Sparse Attack (TSA) for LLM-based time series forecasting. By modeling the attack process as a Cardinality-Constrained Optimization Problem (CCOP), we develop a Subspace Pursuit (SP)--based method that restricts perturbations to a limited number of time steps, enabling efficient attacks. Experiments on advanced LLM-based time series models, including LLMTime (GPT-3.5, GPT-4, LLaMa, and Mistral), TimeGPT, and TimeLLM, show that modifying just 10\% of the input can significantly degrade forecasting performance across diverse datasets. This finding reveals a critical vulnerability in current LLM-based forecasters to low-dimensional adversarial attacks. Furthermore, our study underscores the practical application of CCOP and SP techniques in trustworthy AI, demonstrating their effectiveness in generating sparse, high-impact attacks and providing valuable insights into improving the robustness of AI systems.
[ "Large Language model", "time series forecasting", "adversarial attack" ]
Accept
https://openreview.net/pdf?id=oL806RzbDi
https://openreview.net/forum?id=oL806RzbDi
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "jfZzDWQF6s", "Kkq8bRgtmT", "6c1wav0wNf" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741099943005, 1740507649047, 1740702456225 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission112/Reviewer_uycS" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission112/Reviewer_UpiJ" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Temporally Sparse Attack for Fooling Large Language Models in Time Series Forecasting\", \"review\": [\"**Pros**:\", \"The experimental evaluation is comprehensive.\", \"The logics and explanation is clear to me.\", \"**Cons**:\", \"From my perspective, the proposed attack is basically a gradient-based optimization with sparsity constraints. The novelty is not enough\", \"The writing need to improve. For example, in experimental part, the content is not well-organized due to too many subsections.\", \"The font size of the figures is not large enough.\"], \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"This paper presents a temporally sparse adversarial attack against LLM-based time series forecasting, revealing critical vulnerabilities but lacking discussion on attack transferability and defense strategies.\", \"review\": \"This work introduces Temporally Sparse Attack (TSA), an adversarial method targeting LLM-based time series forecasting models by modifying only a small fraction of input data. Its very clear the topic here is very relevant to this workshop as well as important considering the current industry scenario. The attack is formulated as a Cardinality-Constrained Optimization Problem (CCOP) and solved using Subspace Pursuit (SP), ensuring minimal yet high-impact perturbations, This looks promising. The paper is well-structured with strong theoretical grounding, rigorous empirical validation (Sign of a good paper) , and a clear black-box threat model which i really liked and appreciate that authors didn't left it out. The results convincingly show that perturbing just 10% of the input can significantly degrade performance on LLMTime (GPT-3.5, GPT-4, LLaMa, Mistral), TimeGPT, and TimeLLM, highlighting a fundamental weakness in LLM-based forecasting. However, the study does not explore attack transferability to unseen models, evaluate computational efficiency, or propose meaningful defense mechanisms. A deeper analysis of these aspects would significantly improve the impact of this work.\\n\\nStrengths\\n1. Novel sparse adversarial attack with strong empirical validation\\n2. Rigorous black-box formulation using CCOP and SP\\n3. Reveals critical security risks in LLM-based forecasting\\n4. Well-structured experimental design\\n\\nWeaknesses\\n1. Lack of discussion on attack transferability\\n2. Computational efficiency of SP-based attack remains unclear\\n3. Limited exploration of defense strategies\", \"rating\": \"8\", \"confidence\": \"5\"}" ] }
oCprwPRqwW
Finding Sparse Autoencoder Representations Of Errors In CoT Prompting
[ "Justin Theodorus", "V Swaytha", "Shivani Gautam", "Adam Ward", "Mahir Shah", "Cole Blondin", "Kevin Zhu" ]
Current large language models often suffer from subtle, hard-to-detect reasoning errors in their intermediate chain-of-thought (CoT) steps. These errors include logical inconsistencies, factual hallucinations, and arithmetic mistakes, which compromise trust and reliability. While previous research focuses on mechanistic interpretability for best output, understanding and categorizing internal reasoning errors remains challenging. The complexity and non-linear nature of these CoT sequences call for methods to uncover structured patterns hidden within them. As an initial step, we evaluate Sparse Autoencoder (SAE) activations within neural networks to investigate how specific neurons contribute to different types of errors.
[ "mechanistic interpretability", "chain-of-thought prompting", "sparse autoencoders", "large language models" ]
Accept
https://openreview.net/pdf?id=oCprwPRqwW
https://openreview.net/forum?id=oCprwPRqwW
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "g6BnK3GPYG", "9WV2mgxmyB", "3Va0DX3kwe", "1Od1BBKBou" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1740924092699, 1740988258722, 1740899228962, 1740904767296 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission152/Reviewer_MiSp" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission152/Reviewer_V2hK" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission152/Reviewer_TBEM" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"This study remains exploratory rather than providing actionable insights for improving LLM trust.\", \"review\": \"The paper touches on an important topic\\u2014trust and reliability in LLM reasoning steps\\u2014but falls short in execution. While the use of SAEs for interpretability is an interesting direction, the study lacks empirical rigor, making it hard to assess the method\\u2019s effectiveness. The authors fail to show that SAEs provide meaningful decompositions of reasoning errors. They also do not compare against other interpretability techniques. To make this work more compelling, future research should also test whether identified sparse directions can actually be used for model correction.\", \"rating\": \"4\", \"confidence\": \"3\"}", "{\"title\": \"Applying Sparse Autoencoders to Uncover Error Patterns in Chain-of-Thought Reasoning\", \"review\": \"The paper \\\"Finding Sparse Autoencoder Representations of Errors in CoT Prompting\\\" presents a novel methodology for analysing reasoning errors in Large Language Models (LLMs) during chain-of-thought (CoT) generation using Sparse Autoencoders (SAEs). The authors manually labelled 1,000 CoT responses from the GSM8K dataset into nine distinct error categories and extracted SAE features from Gemma-2B's activations. Their aim was to uncover structured patterns in reasoning failures.The work bridges the gap between mechanistic interpretability and practical debugging of LLMs. It focuses on error-specific feature directions in activation space rather than solely on final model outputs.\\n\\nStrengths\\n\\nThe study's primary strength lies in its innovative application of SAEs to the understudied problem of reasoning error analysis.While SAEs have been explored in mechanistic interpretability, their use for categorizing CoT errors represents a fresh direction with practical implications for improving model transparency. The integration of ROSCOE's error taxonomy provides a structured framework for labelling, and the three-step methodology (error labeling, SAE activation extraction, and correlation analysis) is logically coherent and reproducible.The interdisciplinary approach\\u2014combining error taxonomies from prior work with SAE tools like SAELens\\u2014demonstrates effective synthesis of existing techniques. Furthermore, the emphasis on intermediate reasoning steps, as opposed to final conclusions, is in alignment with the mounting demand for interpretability in AI systems.\\n\\nWeaknesses\\n\\nNotwithstanding the encouraging framework, the paper is currently lacking in empirical validation, thus leaving key questions unanswered. These include the question of whether SAE features reliably correlate with error types, and how they compare to alternative methods. 
The absence of quantitative results means that the paper's conclusions are not supported by evidence.The sample size of 1,000 CoT responses may be insufficient to capture rare error types (e.g. arithmetic mistakes), and the manual labelling process introduces potential annotation bias, which is not mitigated by inter-annotator agreement metrics. The paper does not provide sufficient technical details about the SAE architecture (e.g. sparsity constraints, training data, layer selection rationale), which limits the ability of the reader to reproduce the results. Furthermore, the paper does not benchmark SAE-based analysis against simpler baselines like linear probes or PCA, leaving its comparative advantage unclear.\\n\\nOriginality and Significance\\n\\nThe work is original in its objective of mapping error categories to sparse feature directions, a direction that has received comparatively little attention in the context of interpretability research. While ROSCOE and related studies focus on evaluating correctness, this study explicitly links internal activations to reasoning failures, thus offering a pathway for targeted debugging. If validated, this approach has the potential to significantly enhance tools for diagnosing and mitigating CoT errors, thereby contributing to more reliable and transparent LLMs.\\n\\nClarity\\n\\nThe paper is generally well-structured, with a clear motivation and methodology. However, there is a lack of depth in the technical sections (e.g. SAE setup, correlation analysis) to support replication.Critical components, such as Table 1 (error definitions), are referenced but missing from the submitted text, which disrupts readability.Additionally, the discussion of SAE training and activation normalization is overly brief, leaving ambiguities about implementation choices.\", \"rating\": \"6\", \"confidence\": \"2\"}", "{\"title\": \"Official Review of Submission152\", \"review\": \"# Summary\\nThe paper proposes a novel approach for diagnosing internal chain-of-thought reasoning errors in large language models. By leveraging sparse autoencoder representations, the authors extract interpretable features from model activations. They correlate these sparse features with error types from manual annotations, establishing a framework for systematic CoT error analysis and interpretability improvements. \\n# Strengths \\nThe methods and experiment procedures are clearly described.\\n# Weakness and Questions \\n1. Would you mind sharing any initial experimental results? \\n2. Could you kindly elaborate on how you evaluate the effectiveness of your method, including the specific metrics or the testing procedure?\", \"rating\": \"5\", \"confidence\": \"3\"}" ] }
nVgHZ4WcnJ
HLogformer: A Hierarchical Transformer for Representing Log Data
[]
Transformers have gained widespread acclaim for their versatility in handling diverse data structures, yet their application to log data remains underexplored. Log data, characterized by its hierarchical, dictionary-like structure, poses unique challenges when processed using conventional transformer models. Traditional methods often rely on manually crafted templates for parsing logs, a process that is labor-intensive and lacks generalizability. Additionally, the linear treatment of log sequences by standard transformers neglects the rich, nested relationships within log entries, leading to suboptimal representations and excessive memory usage. To address these issues, we introduce HLogformer, a novel hierarchical transformer framework specifically designed for log data. HLogformer leverages the hierarchical structure of log entries to significantly reduce memory costs and enhance representation learning. Unlike traditional models that treat log data as flat sequences, our framework processes log entries in a manner that respects their inherent hierarchical organization. This approach ensures comprehensive encoding of both fine-grained details and broader contextual relationships. Our contributions are threefold: First, HLogformer is the first framework to design a dynamic hierarchical transformer tailored for dictionary-like log data. Second, it dramatically reduces memory costs associated with processing extensive log sequences. Third, comprehensive experiments demonstrate that HLogformer more effectively encodes hierarchical contextual information, proving to be highly effective for downstream tasks such as synthetic anomaly detection and product recommendation.
[ "Anomaly Detection", "Hierarchical Transformers", "Efficient Transformers" ]
Reject
https://openreview.net/pdf?id=nVgHZ4WcnJ
https://openreview.net/forum?id=nVgHZ4WcnJ
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "zmrKdDn4RV", "u3gIcc8XvC", "pwn6Umjh9k", "i4nUiH9gIu" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741054395437, 1739692371892, 1740880986104, 1740697802552 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission16/Reviewer_kQy2" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission16/Reviewer_HPDT" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission16/Reviewer_eMUN" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"review from kQy2\", \"review\": \"The paper presents a sound approach to handling log data using a hierarchical transformer. The overall quality is solid, with extensive experimental evaluations across multiple datasets and tasks, which addresses both memory inefficiencies and the loss of contextual hierarchy. The pros and cons are listed below:\\n\\n---\\n**Pros**:\\n* The focus on log data is interesting, and the plug-in nature of HLogformer for different backbones shows innovation.\\n* Tailored transformer design that leverages the inherent hierarchy of log data. \\n* Demonstrated reduction in memory consumption and improved performance across various tasks.\\n\\n**Cons**:\\n* The paper sometimes reads as an incremental extension of existing hierarchical transformer ideas rather than a groundbreaking departure. It would be interesting to see more discussion on how this approach fundamentally differs from other hierarchical models. Furthermore, I'm curious about what the fundamental difference between log data and some other structured data like table, graph, etc., is. To me, some other methods that have shown good performances on table/graph may also work for this log data as sometimes this structured data is also represented as a hierarchical tree. The authors may provide more baselines or analysis for clarifying this issue to highlight the novelty of this paper.\\n* The bidirectional summary passing mechanism from lines 238 to 243 is not clear: what is the new proposed bidirectional summary passing technique? and how does this mechanism impact performance, especially in comparison to standard approaches?\\n* How would HLogformer perform on logs with less regular or evolving structures? I would like to know more about the authors' insights on how this method can be applied to real applications and what type of cases could benefit from this work?\\n\\nOverall, while the paper makes a valuable contribution, clarifying some of its technical aspects and expanding the discussion on limitations would strengthen the work.\\n\\n----\\nI have some further concerns regarding the topic of this paper; it may not fit the requirements of this workshop. As it mainly focuses on proposing a sound method for using a hierarchical transformer to handle log data, while the workshop's purpose is mainly to build trust in language models **(requires ACs to take a look at this issue)**.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"The paper proposes a novel hierarchical transformer that better represents nested log data by exploiting its tree structure, thereby reducing memory usage and improving downstream tasks like anomaly detection and recommendations compared to transformers for log data.\", \"review\": \"This paper proposes HLogformer, a hierarchical transformer tailored for dictionary-like, nested log data. 
The authors identify a gap in how conventional transformers handle logs: treating them as linear sequences and consuming excessive memory. They propose a model that exploits the hierarchical structure of logs to reduce complexity and preserve context.\n\nThis work provides a well-reasoned solution for a notable domain problem. The discussion of how log entries can be viewed as a tree structure instead of a flat sequence makes sense, and it is applied in the paper with a step-by-step hierarchical summary framework. The clarity of the text is also good. The authors carefully motivate the challenge of dealing with dictionary-like data, show how they derive a hierarchical tree representation, and then detail their transformer architecture that processes segments in a way that reflects this inherent nesting. It is a logical progression, and I seldom felt lost while reading. The originality of the approach lies in how the authors use progressive summaries that pass from lower-level segments of the log to higher-level segments, ensuring that both fine-grained and broad contextual cues can be captured.\n\nPros include the strong argument about the dictionary-like structure in logs, the consistent demonstration of memory savings, and the application of the approach to multiple tasks (anomaly detection, recommendation, etc.) to prove wide coverage. Another plus is that it fits with a variety of frameworks, so it is generally reusable as a plugin for hierarchical log representation. As for cons, I noticed that the text occasionally lacks further elaboration on hyperparameter tuning or training details that might influence reproducibility.\n\nOne thing I'm less certain about is the fit of HLogformer with the topic of the workshop, which centers around methods to build trust in Large Language Models, while this work, albeit interesting, focuses more on proposing a variant of the transformer for a different data modality.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Out of Scope of the Workshop\", \"review\": \"The authors propose a new transformer architecture, HLogFormer, capable of dealing with log data more efficiently and accurately than standard linear transformers.\\n\\nThe paper itself is interesting, and the HLogFormer can clearly have some targeted use in the real world. However, I fail to see how the paper aligns with the scope of the workshop. The paper focuses on developing a new transformer technique designed specifically to handle log data. This is, however, not a contribution to the trustworthiness of LLMs.\\n\\nMy rating is not a comment on the quality of the paper, but instead, on the lack of any connection between the scope of the workshop and the contributions of this paper. \\nGiven the paper is completely out of scope of the workshop, I don't believe the reviewers should be expected to have any further comments on this paper.\", \"rating\": \"3\", \"confidence\": \"4\"}" ] }
nFkL7qAirE
AI Companions Are Not The Solution To Loneliness: Design Choices And Their Drawbacks
[ "Jonas B Raedler", "Siddharth Swaroop", "Weiwei Pan" ]
As the popularity of social AI grows, so has the number of documented harms associated with its usage. Drawing on Human-Computer Interaction (HCI) and Machine Learning (ML) literature, we frame the harms of AI companions as a $\textit{technological problem}$ and draw direct links between key technical design choices and risks for users. We argue that many of the observed harms are foreseeable and preventable consequences of these choices. In the spirit of $\textit{translational research}$, we offer concrete strategies to mitigate these harms through both regulatory and technical interventions, aiming to make our findings useful and actionable for policymakers and practitioners.
[ "Social AI", "AI governance", "Ethical AI" ]
Accept
https://openreview.net/pdf?id=nFkL7qAirE
https://openreview.net/forum?id=nFkL7qAirE
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "EaE3WQjDAS", "AL0ssGnqGy", "5Vclo0j8Mg" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741076132396, 1740765644046, 1740861951517 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission88/Reviewer_xAPf" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission88/Reviewer_cmQF" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"comment\": \"This paper highlights how AI design choices impact vulnerable users, particularly lonely individuals, and emphasizes the risks of AI companions. It offers valuable insights for LLM researchers, suggesting that many risks are inherent by design but can be mitigated.\", \"title\": \"Paper Decision\"}", "{\"title\": \"A good paper\", \"review\": \"The paper puts into perspective the risks of AI companions for vulnerable populations and how most of them are by design and could be mitigated. I think this paper is very nice and insightful. I cannot give a higher score only because I am not familiar enough with the literature on this subject to say if it is an exceptional good paper.\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"Review\", \"review\": \"This paper is an extremely important and well-written contribution that highlights how technological design choices we (AI researchers) make are directly related to downstream risks and harms for vulnerable users, specifically lonely people. I believe every LLM researcher building a system intended to be used by a human would benefit from reading this paper.\", \"rating\": \"9\", \"confidence\": \"4\"}" ] }
moLzTzk0uu
Evaluating Text Humanlikeness via Self-Similarity Exponent
[ "Ilya Pershin" ]
Evaluating text generation quality in large language models (LLMs) is critical for their deployment. We investigate the self-similarity exponent S, a fractal-based metric, for quantifying "humanlikeness." Using texts from a publicly available dataset and Qwen models (with/without instruction tuning), we find that human-written texts exhibit S = 0.57, while non-instruct models show higher values, and instruct-tuned models approach human-like patterns. Larger models improve quality but benefit more from instruction tuning. Our findings suggest S as an effective metric for assessing LLM performance.
[ "humanlikeness", "self-similarity exponent", "fractal structure of language" ]
Accept
https://openreview.net/pdf?id=moLzTzk0uu
https://openreview.net/forum?id=moLzTzk0uu
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "ovTatHbq0t", "Z8PVWPnbZF", "RTV5AFUYfk" ], "note_type": [ "official_review", "decision", "official_review" ], "note_created": [ 1740983285034, 1741055717918, 1740741271136 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission66/Reviewer_NSFe" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission66/Reviewer_sSwu" ] ], "structured_content_str": [ "{\"title\": \"While the approach of this paper (using the self-similariy exponent as a metric for evaluating the human-likeness of machine-generated text) is novel and in scope of the workshop, more invest5igations into its applicability and usefulness is necessary.\", \"review\": \"This paper presents an interesting contribution to evaluating the human-likeness of LLM-generated text using the self-similarity exponent $S$, a fractal-based metric. The results effectively demonstrate that instruction tuning significantly improves alignment with human text patterns, leveraging $S$. However, the study has several limitations, as the real world interpretability and practical usability remain unclear of this metric. How does this metric compare to more traditional evaluation methods such as perplexity or human annotation in assessing trustworthiness? Additionally, investigating how $S$ correlates with human perceptions of text quality could help establish its real-world relevance.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Good empirical validation, but the experimental setup is limited\", \"review\": \"**Summary**\\n\\nThis paper suggests using the self-similarity exponent S to assess the humanlikeness of text generated by large language models. Experiments compare human-written texts from the MAGE dataset with outputs from different variants of the Qwen model. The findings show that human texts exhibit S \\u2248 0.57, while non-instruction-tuned models yield higher S values. Notably, instruction tuning significantly lowers S, bringing machine-generated texts closer to human-like patterns.\\n\\n**Strengths**\\n* **Empirical Validation**: By comparing different model variants (instruction-tuned vs. non-instruction-tuned) and relating S to human-written texts, the study provides compelling evidence that instruction tuning can improve humanlikeness.\\n\\n**Weaknesses**\\n* **Limited Model Scope**: The experiments are confined to the Qwen family of models. Evaluating S across a broader range of language models would strengthen the claim of its general applicability.\\n* **Influence of generation parameters**: I believe that the humanlikeness of model generation depends on the sampling parameters (temperature, top p, top k, number of beams, etc.). The article does not specify which parameters were used in the study. It would also be interesting to look at the metric values with other values.\\n* **Comparative Analysis**: The paper could benefit from a more extensive comparison with other evaluation metrics to highlight the unique advantages and potential limitations of using S.\\n\\n**Questions**\\n* Could you tell us about calculating self-similarity exponent S using some toy example so that the algorithm is better understood?\", \"rating\": \"3\", \"confidence\": \"2\"}" ] }
mkp0GPTGx6
Towards Neural No-Resource Language Translation: A Comparative Evaluation of Approaches
[]
No-resource languages—those with minimal or no digital representation—pose unique challenges for machine translation (MT). Unlike low-resource languages, which rely on limited but existent corpora, no-resource languages often have fewer than 100 sentences available for training. This work explores the problem of no-resource translation through three distinct workflows: fine-tuning of translation-specific models, in-context learning with large language models (LLMs) using chain-of-reasoning prompting, and direct prompting without reasoning. Using Owens Valley Paiute as a case study, we demonstrate that no-resource translation demands fundamentally different approaches from low-resource scenarios, as traditional approaches to machine translation, such as those that work for low-resource languages, fail. Empirical results reveal that, although traditional approaches fail, the in-context learning capabilities of general-purpose large language models enable no-resource language translation that outperforms low-resource translation approaches and rivals human translations (BLEU 0.45-0.6); specifically, chain-of-reasoning prompting outperforms other methods for larger corpora, while direct prompting exhibits advantages in smaller datasets. As these approaches are language-agnostic, they have potential to be generalized to translation tasks from a wide variety of no-resource languages without expert input. These findings establish no-resource translation as a distinct paradigm requiring innovative solutions, providing practical and theoretical insights for language preservation.
[ "Computational Linguistics", "Machine Translation", "Fine Tuning", "Large Language Models (LLMs)" ]
Reject
https://openreview.net/pdf?id=mkp0GPTGx6
https://openreview.net/forum?id=mkp0GPTGx6
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "ooZ27ZDGVU", "jcQAuBncDa", "FYftKLzdp2", "AJcTuxaDRs" ], "note_type": [ "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1741028872386, 1740957745957, 1739896007816, 1741109789891 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission80/Reviewer_kKXF" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission80/Reviewer_MgCR" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission80/Reviewer_5U7c" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Interesting application, a bit difficult to read\", \"review\": \"The authors investigate strategies for evoking translation of no-resource languages via decoder-only models. The language of interest is Owens Valley Paiute. The authors explore low-resource translation of this language via fine-tuning and ICL/chain-of-thought with PaLM. The corpus consists of 100 sentences, entering a regime which the authors define as \\\"no-resource.\\\" Chain-of-thought shows promising BLEU scores on held-out translations.\\n\\nThe paper is difficult to read mostly because it's not clear whether 100-\\\"words\\\" or 100-\\\"phrases\\\" are used. The authors frequently switch between these terminologies (such as in section 4.1). The provided dataset link is dead. Since some sentences (or words?) are included in the ICL, naturally, this changes the interpretability of the BLEU score as the validation set is changing in size. The authors should have simply held out 20-25 sentences across all settings and then vary the amount of examples given. (If this is already occurring, it's not clear.). Given the limited size of the corpus, it would have been interesting to see how full fine-tuning affected performance vs QLoRA. \\n\\nI prefer to accept this paper as it fits the theme of the workshop and the machine translation community rarely considers ultra-low/no-resource language at this scale, especially using training-free strategies. If this is accepted, I encourage the author to compare with decipherment, which similarly tries to achieve translation with no/limited parallel corpuses (see Ravi and Knight, \\\"Deciphering Foreign Language.\\\" ACL, 2011 -- these two authors have a series of seminal works on this topic.)\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Towards Neural No-Resource Language Translation: A Comparative Evaluation of Approaches\", \"review\": \"This paper investigates how neural LLMs handle translation for languages with no direct training data. The authors compare different approaches, including fine-tuning, chain-of-reasoning prompting, and direct prompting, to evaluate their effectiveness. 
The study aims to understand how LLMs generalize translation patterns for no-resource languages and whether intermediate languages play a role in translation performance.\\n\\nStrengths\", \"novel_problem_focus\": \"The paper addresses an important and underexplored issue\\u2014translation for truly no-resource languages, which is critical for language preservation.\", \"comprehensive_comparison\": \"The study evaluates multiple translation strategies, providing a well-rounded assessment of different techniques.\", \"well_defined_experiments\": \"The methodology is clear, with a structured evaluation framework using LLM-based approaches.\", \"useful_insights\": \"The findings reveal key differences between fine-tuning, chain-of-reasoning prompting, and direct prompting, offering valuable takeaways for future research.\\n\\nWeaknesses\", \"limited_language_diversity\": \"The study focuses on a small set of languages, making it unclear whether the findings generalize to all no-resource languages.\", \"lack_of_dataset_analysis\": \"The paper does not explore whether LLMs leverage indirect exposure to target languages through pretraining data.\", \"no_practical_implementation\": \"While the study identifies effective methods, it does not discuss how these approaches could be deployed in real-world translation systems.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"This paper addresses the underexplored challenge of translating no-resource languages (fewer than 100 documented phrases) using neural methods. It rigorously evaluates three approaches: fine-tuning, chain-of-reasoning prompting, and direct prompting, with Owens Valley Paiute as a case study. The results demonstrate that traditional fine-tuning fails, while chain-of-reasoning prompting with large language models (LLMs) achieves human-level translation quality (BLEU 0.45-0.6), outperforming direct prompting for larger corpora. The work is original, well-structured, and significant for language preservation, though it could improve by expanding language coverage, exploring synthetic data generation, and addressing ethical concerns. Overall, it provides a strong foundation for future research in no-resource translation.\", \"review\": \"The paper is well-structured and methodologically sound. It addresses a significant gap in machine translation (MT) research by focusing on no-resource languages, which are often overlooked in favor of low-resource languages. The authors provide a clear definition of no-resource languages (fewer than 100 documented phrases) and rigorously evaluate three distinct approaches: fine-tuning, chain-of-reasoning prompting, and direct prompting. The empirical results are well-documented, with comprehensive metrics (BLEU, ROUGE, TER, METEOR) used to evaluate translation quality. The use of Owens Valley Paiute as a case study is appropriate, given its status as a no-resource language.\\n\\nHowever, the paper could benefit from a more detailed discussion of the limitations of the study. For instance, the evaluation is limited to one language, and the generalizability of the findings to other no-resource languages is not thoroughly explored. Additionally, while the authors mention the potential for synthetic data generation, they do not provide concrete experiments or results in this direction.\\n\\nThe paper is generally clear and well-written. The introduction provides a strong motivation for the study, and the methodology is described in sufficient detail to allow for replication. 
The use of figures (e.g., Figures 1, 2, and 3) effectively illustrates the comparative performance of the different approaches. The discussion section is particularly strong, offering insights into the theoretical implications of the findings and suggesting future directions.\\n\\nOne area for improvement is the clarity of the experimental setup. While the authors describe the fine-tuning process and prompting methods, more details on the specific prompts used for chain-of-reasoning and direct prompting would be helpful. Additionally, the paper could benefit from a clearer explanation of why fine-tuning fails in no-resource scenarios, as opposed to low-resource ones.\\n\\nThe paper is highly original in its focus on no-resource languages, a niche but important area in MT research. The authors make a compelling case that no-resource translation is fundamentally different from low-resource translation and requires novel approaches. The use of chain-of-reasoning prompting is particularly innovative, leveraging the emergent reasoning capabilities of large language models (LLMs) to infer translations from minimal data.\\n\\nThe originality of the work is further highlighted by its departure from traditional rule-based or data-augmentation approaches, which are ineffective for no-resource languages. Instead, the authors explore purely neural methods, demonstrating that LLMs can achieve human-level translation quality without extensive linguistic rules or large corpora.\\n\\nThe study also contributes to the broader field of MT by challenging the assumption that large corpora are necessary for effective translation. The success of chain-of-reasoning prompting suggests that general-purpose reasoning capabilities in LLMs may supplant domain-specific fine-tuning in extreme low-resource scenarios.\\n\\nPros\", \"novel_focus\": \"The paper addresses a critical gap in MT research by focusing on no-resource languages, which are often neglected in favor of low-resource languages.\", \"innovative_methods\": \"The use of chain-of-reasoning prompting is a novel approach that leverages the emergent reasoning capabilities of LLMs to infer translations from minimal data.\", \"strong_empirical_results\": \"The paper provides robust empirical evidence that chain-of-reasoning prompting outperforms traditional fine-tuning and direct prompting methods, achieving human-level translation quality.\", \"theoretical_contributions\": \"The study contributes to the theoretical understanding of no-resource translation, highlighting the limitations of traditional MT methods and the potential of LLMs in linguistically diverse scenarios.\", \"practical_implications\": \"The findings have significant practical implications for language preservation, particularly for endangered languages with minimal digital representation.\\n\\nCons\", \"limited_generalizability\": \"The study is based on a single no-resource language (Owens Valley Paiute), and the generalizability of the findings to other languages is not thoroughly explored.\", \"lack_of_synthetic_data_experiments\": \"While the authors mention the potential for synthetic data generation, they do not provide concrete experiments or results in this direction.\", \"fine_tuning_failure_analysis\": \"A more detailed analysis of why fine-tuning fails in no-resource scenarios would provide valuable insights into the limitations of traditional MT methods.\", \"prompt_clarity\": \"The specific prompts used for chain-of-reasoning and direct prompting are not provided, which could 
limit the reproducibility of the study.\", \"ethical_considerations\": \"The paper should address potential ethical concerns related to the translation of endangered languages, including the risk of misrepresentation or cultural insensitivity.\", \"expand_language_coverage\": \"Future work should include experiments with multiple no-resource languages from different linguistic families to validate the generalizability of the findings.\", \"synthetic_data_experiments\": \"The authors should explore the use of synthetic data generation to expand the corpus size and improve translation performance.\", \"detailed_prompt_examples\": \"Providing specific examples of the prompts used for chain-of-reasoning and direct prompting would enhance the clarity and reproducibility of the study.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"comment\": \"This paper explores no-resource language translation using LLMs by comparing different approaches. Despite positive reviews, the paper is not aligned with the focus of the Building Trust in LLMs workshop, as it primarily addresses language preservation and translation strategies rather than trust, safety, or alignment in LLMs. Given its low relevance to the workshop, I recommend rejection.\", \"title\": \"Paper Decision\"}" ] }
mLzUAoYBbs
Hidden No More: Attacking and Defending Private Third-Party LLM Inference
[ "Arka Pal", "Rahul Krishna Thomas", "Louai Zahran", "Erica Choi", "Akilesh Potti", "Micah Goldblum" ]
Recent advances in Large Language Models (LLMs) have led to widespread adoption of third-party inference services, raising critical privacy concerns. In this work, we introduce a novel reconstruction technique that can recover original prompts from hidden states with nearly perfect accuracy across multiple state-of-the-art LLMs in the increasingly important open-weights setting. Although the attack is conceptually simple, it has not -- to the best of our knowledge -- previously been described nor shown to work practically. Furthermore, our attack remains effective against various permutation and noise-based defenses, challenging assumptions about the security of previously proposed schemes. To address these vulnerabilities, we propose Cascade, a multi-party inference scheme that leverages sharding in the sequence dimension to retain privacy of the user input. Through theoretical analysis and empirical evaluation, we demonstrate that Cascade is secure against both our attack as well as previous methods, while maintaining computational and communication efficiency. Our findings highlight the importance of rigorous security analysis in privacy-preserving LLM inference and offer practical solutions for secure deployment.
[ "LLM", "Security", "Privacy", "Trust" ]
Accept
https://openreview.net/pdf?id=mLzUAoYBbs
https://openreview.net/forum?id=mLzUAoYBbs
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "tDpmR8skoX", "sgn2QJX9a9", "dYgHo3jI1a", "I7UEl2jU54" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740504764798, 1741104215596, 1739709098135, 1740703293999 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission144/Reviewer_zvkW" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission144/Reviewer_SKau" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission144/Reviewer_rJck" ] ], "structured_content_str": [ "{\"title\": \"Interesting topic, presentation could be improved.\", \"review\": [\"In general, I find the addressed topic and the paper interesting and could be a good addition to the workshop. However, I have certain concerns that I strongly advise the authors to address before they make any copy of the manuscript public:\", \"The basic ideas behind the attack presented in Sections 4 and 5 are in principle similar to the attack in [1], while [1] deals with the more difficult case of gradient inversion. While I believe that the ideas can be independently discovered, as they are relatively simple, relation to prior work still has to be discussed.\", \"In general, a stronger connection to gradient inversion and model inversion literature should be established. I understand that this is an inference-time attack, while those are trianing-time attacks, certain techniques, as seen above, translate between these attack types.\", \"The significance of the $\\\\epsilon$ threshold in the matching procedure is unclear. I wonder why simply taking the min L1 distance would not be sufficient. This is anyway what the algorithm falls back to if no matches within $\\\\epsilon$ distance are found. Also, if the $\\\\epsilon$ match is unique, it is then necessarily the min.\", \"Regarding the proposed defense; if I understood it correctly, there is still a node that has access to all logits in the end. In this way, it might be possible to apply a similar auto-regressive enumaration attack as the one presented in Sections 4 and 5, but instead of looking at the hidden state, looking at the logits. The attacker here could be the owner of this node, and as we assume that the attacker has access to the full weights of the model, they could run the model in shadow to match the logits.\"], \"some_higher_level_comments\": [\"In general, I find the structuring of the paper rather confusing. Method-related and experimental sections are intermixed, and experimental results are embedded into technique presentations. The paper would benefit from a clear separation of technical and experimental sections. The experiments themselves would also benefit from a more extensive introduction, clearly setting up the experimental paramaters.\", \"The threat model has to be also clarified better. It is unclear what role the fact plays that the model is open-source. Naturally, any hosting service will have access to the model weights, no matter if those are publicly accessable or not. It is also unclear why such hosting services would be willingly giving up parts of their inference infrastructure and share inference it with other parties. Also, why could these parties not collude and exchange information about the hidden states on their nodes? 
Finally, it would also help the clarity of the threat model if the authors would discuss related hardware-level confidentiality techniques, i.e., trusted execution environments.\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Simple yet effective prompt reconstruction attack\", \"review\": [\"# Summary\", \"The paper proposes a very simple reconstruction technique that can almost completely recover the input prompt from hidden states.\", \"It demonstrates that the proposed method is robust against existing defenses such as Secure Multi-Party Computation (SMPC) and introduces a new sharding-based multi-party scheme called Cascade.\", \"# Strength\", \"**Practicality**: The paper shows attention to practical details, such as implementing fuzzy matching to account for noise that may occur in computer systems. It is interesting that the introduced $\\\\epsilon$ does not affect the attack success rate.\", \"**Extensive Experiments**: The method is shown to function not only in simple settings but also in cases where mechanisms like permutation, noise, and quantization act as defenses.\", \"**Defense**: The paper thoroughly organizes the features and drawbacks of existing defenses and proposes Cascade as a defense to neutralize the proposed attack. The analysis of Cascade (Section 7.2) is also extensive.\", \"# Comments, Weakness\", \"**Intuitive Understanding**: Visual explanations or concrete examples of what is happening during reconstruction might aid in understanding.\", \"**Runtime**: Table 4 presents the measurement results using BERT, but can it scale to more recent, larger models (or multilingual models with a larger vocabulary)? Although 4.2.2 mentions one example (it takes < 30 seconds to reconstruct 50 token prompts), readers would benefit from rigorous and extensive experiments in this direction (e.g., the correlation between runtime and model size / target layer depth / vocab size / etc).\", \"Overall, the proposal is simple yet effective, with significant practical implications. If, as the authors claim, no similar attacks exist, this paper represents a substantial contribution.\"], \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"This paper reveals a potent new attack on hidden-state privacy and introduces a scalable multi-party token-sharding defense, Cascade, that resists existing reversal techniques and offers a practical alternative to heavy MPC-based solutions. It highlights the urgent need for robust privacy mechanisms in open-weights LLM inference.\", \"review\": \"This paper deserves an extensive review so here we go :\", \"quality\": \"This paper provides a thorough investigation of privacy vulnerabilities in third-party LLM inference under the open-weights setting. The authors introduce a simple yet powerful \\u201cvocab-matching\\u201d attack capable of recovering original user prompts with near-perfect accuracy, even when hidden states are permuted or injected with noise. Through extensive experiments on different state-of-the-art LLM architectures (Gemma-2-2B-IT, Llama-3.1-8B-Instruct) and hidden-layer settings, the paper demonstrates a broad range of successful attacks that defeat several existing defenses. The proposed defense, Cascade, is a token-sharding multi-party inference scheme, which trades off certain cryptographic guarantees in order to achieve significantly faster and more communication-efficient privacy-preserving inference. 
The paper\\u2019s security analysis includes both brute-force style and learning-based attacks, and the authors provide detailed cost (computation and communication) analyses relative to alternative MPC approaches. Overall, the methodology is well-structured, and the empirical evaluations are convincing.\", \"clarity\": \"The paper is clear in its exposition of the attack setup, threat model, and the design of Cascade. The theoretical details behind the vocab-matching attack and its variants (to handle permuted or noised hidden states) are spelled out systematically, and the step-by-step algorithms make the approach and experimental procedures quite transparent. Cascade\\u2019s architecture, while conceptually non-trivial, is broken down into discrete multi-party steps\\u2014pre-pass, attention-pass, and post-pass\\u2014making it easier to follow how the token-level partitioning is actually performed. A few points (e.g., real-world latency of Cascade, hardware differences in floating-point arithmetic) might benefit from additional elaboration, but these do not detract significantly from the overall clarity.\", \"originality\": \"The paper\\u2019s central contribution is a new line of attack\\u2014an efficient sequential vocab-matching procedure\\u2014specifically tailored to exploit the autoregressive structure of modern LLMs in an open-weights environment. While past work has explored embedding inversion and hidden-state reconstruction, few have shown near-perfect text recovery against sophisticated defenses such as permutation-based schemes and noise injection.\\nThe proposed Cascade protocol, leveraging token-wise partitioning across multiple untrusted nodes, is also novel in how it avoids the heavy overhead typical of full-fledged MPC.\", \"significance\": \"1. Attack Implications: The demonstration that hidden-state permutations or moderate noise do not suffice to hide user inputs underscores a critical vulnerability in certain existing \\u201clightweight\\u201d privacy solutions. This is directly relevant to the workshop\\u2019s focus on trust and safety in LLM applications, particularly in regulated domains (e.g. healthcare).\\n2. Defensive Contribution: Cascade offers a meaningful middle ground on the privacy\\u2013efficiency spectrum, which practitioners may find more practical than full-blown cryptographic protocols. The authors show strong empirical gains (over 100\\u00d7 faster than prior MPC methods on Bert-sized models), suggesting wide potential impact if integrated into real-world distributed inference pipelines.\\n\\nPros\\n1. High Attack Success: Vocab-matching recovers user prompts with remarkable accuracy under many transformations (permutation, noise, quantization) at multiple layers of large modern LLMs.\\n2. Detailed Evaluation: Exhaustive experiments cover permutations across different dimensions (sequence, hidden, factorized 2D) and various noise/quantization schemes, providing convincing evidence of the vulnerability.\\n3. Novel Defense: Cascade\\u2019s use of token sharding is presented as an efficient multi-party scheme that counters both brute-force and learning-based inversion attacks.\\n4. Extensive Analysis: The paper includes thorough theoretical discussion, security analysis, and comparisons with prior cryptographic protocols (quantitative metrics for runtime and communication).\\n\\nCons\\n1. Layer 0 Limitation: The paper concedes that if layer-0 embeddings are exposed in any form, token reconstruction is trivial. 
Cascade thus only achieves security from layer 1 onward, restricting its immediate deployment for fully securing all tokens.\\n2. No Formal Cryptographic Guarantee: Cascade\\u2019s defense is described in terms of sharding\\u2019s statistical obfuscation rather than a strict cryptographic proof. While this design choice is deliberate, it may leave room for stronger adaptive or combined attacks in the future.\\n3. Scalability Nuances: Real-world systems might see performance hits from high latency or unreliable nodes, and the paper\\u2019s evaluation largely assumes ideal parallel transport and bandwidth, warranting more real-network benchmarks.\\n4. Complex Parameter Tuning: Security depends on the choice of shard sizes (c, \\u03b4, \\u03b1, \\u03b2), which is left somewhat open-ended. This could pose an adoption barrier for users unfamiliar with multi-node distribution strategies.\", \"rating\": \"9\", \"confidence\": \"5\"}" ] }
mKfmLQXP6J
Conformal Structured Prediction
[ "Botong Zhang", "Shuo Li", "Osbert Bastani" ]
Conformal prediction has recently emerged as a promising strategy for quantifying the uncertainty of a predictive model; these algorithms modify the model to output sets of labels that are guaranteed to contain the true label with high probability. However, existing conformal prediction algorithms have largely targeted classification and regression settings, where the prediction set takes a simple form as a level set of the scoring function. For complex structured outputs such as text generation, these prediction sets might include a large number of labels and therefore be hard for users to interpret. In this paper, we propose a general framework for conformal prediction in the structured prediction setting that modifies existing conformal prediction algorithms to output structured prediction sets that implicitly represent sets of labels. In addition, we demonstrate how our approach can be applied in domains where the prediction sets can be represented as a set of nodes in a directed acyclic graph; for instance, for hierarchical labels such as in image classification, a prediction set might be a small subset of coarse labels implicitly representing the prediction set of all of their finer-grained descendants. We demonstrate how our algorithm can be used to construct prediction sets that satisfy a desired coverage guarantee in several domains.
[ "Conformal Prediction", "Structured Prediction", "Integer Programming" ]
Accept
https://openreview.net/pdf?id=mKfmLQXP6J
https://openreview.net/forum?id=mKfmLQXP6J
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "uDw37UFtNP", "tjpgniNo3m", "tQ81EiqNcS", "UD3khn6NwM" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740304618199, 1741099615917, 1740808814485, 1740944440969 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission124/Reviewer_cmaJ" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission124/Reviewer_8GnJ" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission124/Reviewer_xcDJ" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"review\": [\"conformal prediction methods to more complex structured prediction tasks, such as code generation and question answering. The framework generates structured prediction sets, offering a compact and interpretable representation of uncertainty. Specifically, it models the prediction sets as subgraphs of a directed acyclic graph (DAG), which is particularly useful in tasks like code generation, where the prediction set can be represented as a partially completed program. The approach ensures reliable coverage guarantees, either marginal or PAC (probably approximately correct), and applies it to domains such as the SQuAD question answering dataset and Python code generation tasks. Experimental results demonstrate that the framework efficiently constructs small, interpretable prediction sets while maintaining high coverage rates and outperforming existing methods in terms of prediction set size.\", \"Strengths\", \"The framework provides robust coverage guarantees, ensuring high reliability in structured prediction tasks with interpretable uncertainty.\", \"By representing predictions as subgraphs of a directed acyclic graph (DAG), the method delivers smaller and more interpretable prediction sets, improving clarity.\", \"The approach outperforms existing methods in terms of prediction set size while maintaining high coverage rates, demonstrating its effectiveness in tasks like code generation and question answering.\", \"Weaknesses\", \"The method's use of directed acyclic graphs (DAGs) for prediction sets introduces additional complexity in both model training and implementation.\", \"While effective for structured prediction tasks, the approach may face challenges when applied to other types of prediction tasks outside the structured domain.\", \"Generating small and interpretable prediction sets may increase computational costs, especially for large-scale datasets or complex tasks.\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Interesting research, some details need to be described\", \"review\": \"This paper introduces a novel framework for conformal structured prediction, which extends traditional conformal prediction techniques to handle complex structured outputs such as code generation and question answering. The authors propose a method that constructs structured prediction sets represented as subsets of directed acyclic graphs (DAGs), enabling uncertainty quantification in settings where the label space cannot be simply expressed as a collection of regression or classification outputs. The approach is validated on two tasks: date-based question answering using SQuAD and Python code generation from the MBPP dataset.\", \"strengths\": [\"The proposed framework addresses an important gap in conformal prediction by extending it to structured prediction problems. 
This is particularly relevant for applications involving hierarchical labels or complex outputs like programs.\", \"The algorithm can be applied to various domains with structured outputs, including text and code generation, and provides both marginal and PAC (Probably Approximately Correct) coverage guarantees.\", \"The authors conduct experiments on two distinct tasks, demonstrating the effectiveness of their approach in constructing prediction sets that satisfy desired coverage guarantees while maintaining reasonable sizes.\"], \"weaknesses\": [\"While the integer programming formulation for computing structured prediction sets is elegant, its computational cost is not thoroughly discussed. For large DAGs or high-dimensional label spaces, solving this optimization problem may become prohibitively expensive.\", \"Although the authors compare their approach to a baseline adapted from Khakhar et al. (2023), the discussion could benefit from comparisons with other state-of-the-art methods for uncertainty quantification in structured prediction tasks. Additionally, more details about the baseline implementation would help clarify its strengths and limitations relative to the proposed method.\", \"The sensitivity of the hyperparameters m, \\u03f5, and \\u03b4 is analyzed qualitatively but lacks deeper exploration. A more comprehensive study of how these parameters interact and influence the trade-off between coverage and prediction set size would enhance the practical utility of the framework.\", \"The paper makes a valuable contribution by proposing a flexible and theoretically grounded framework for conformal structured prediction. It successfully demonstrates the ability to construct interpretable prediction sets for complex outputs while satisfying coverage guarantees. However, addressing the computational complexity, expanding the scope of experiments, and providing a more thorough comparison with existing methods would further solidify the impact of this work. With these improvements, the paper has the potential to advance the field of uncertainty quantification for structured prediction tasks.\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Nice paper extending conformal prediction to structured label spaces.\", \"review\": \"The paper extends conformal prediction (CP) to structured label spaces, providing marginal coverage and PAC-style guarantees. The framework is described in sufficient detail along with a practical algorithm to estimate thresholds for conformal prediction. Empirical evaluation on the SQUaD dataset shows its practicality and effectiveness.\\n\\nExtending CP to structured prediction settings is an important direction and it can be useful in LLMs, especially when dealing with structured outputs like code, reasoning traces, etc. I have the following feedback to potentially improve the paper,\\n\\n1. It is not clear how the structure of the space is taken into account. There are distances between the nodes in a DAG or any other structured space, where are they playing a role in CP?\\n\\n2. The paper dives into technical details and loses touch with the application/example presented in Figure 1. It would help if the mathematical details and notations are also explained/introduced more clearly with examples. \\n\\n3. Please clarify if you are outputting a set of graphs or a set of nodes from the graph and how do you get scores for each label?\\n\\n4. Experiments on code generation can be helpful, especially when the motivating example (Figure 1.) 
is on code.\", \"rating\": \"6\", \"confidence\": \"4\"}" ] }
lbFVTPv4s6
An Empirical Study on Prompt Compression for Large Language Models
[ "Zhang Zheng", "Jinyi Li", "Yihuai Lan", "Xiang Wang", "Hao Wang" ]
Prompt engineering enables Large Language Models (LLMs) to perform a variety of tasks. However, lengthy prompts significantly increase computational complexity and economic costs. To address this issue, we study six prompt compression methods for LLMs, aiming to reduce prompt length while maintaining LLM response quality. In this paper, we present a comprehensive analysis covering aspects such as generation performance, model hallucinations, efficacy in multimodal tasks, word omission analysis, and more. We evaluate these methods across 13 datasets, including news, scientific articles, commonsense QA, math QA, long-context QA, and VQA datasets. Our experiments reveal that prompt compression has a greater impact on LLM performance in long contexts compared to short ones. In the Longbench evaluation, moderate compression even enhances LLM performance.
[ "prompt compression", "explanation faithfulness", "feature attribution", "robustness" ]
Accept
https://openreview.net/pdf?id=lbFVTPv4s6
https://openreview.net/forum?id=lbFVTPv4s6
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "sveTWb1Rhw", "lbm25QTYML", "TiXryO9oBL", "JzQLzju6aU" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741056998180, 1740915844109, 1740753409891, 1740907603103 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission104/Reviewer_hEQn" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission104/Reviewer_6NvK" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission104/Reviewer_6DaB" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"This paper presents a comprehensive empirical study on six prompt compression methods applied to large (and multimodal) language models. The authors evaluate these techniques on a variety of tasks\\u2014including summarization, reconstruction, and question answering\\u2014across 13 datasets. They analyze not only traditional performance metrics (e.g., BLEU, ROUGE, BERTScore, accuracy) but also the impact on hallucination and response length, and provide insights into which words may be safely omitted.The paper is well-organized and exhaustive in its empirical evaluation, but several aspects warrant critical consideration.\", \"review\": \"Strengths\\n\\n1.Comprehensive Evaluation:\\n The study covers multiple prompt compression methods (including RL-based, LLM scoring-based, and LLM annotation-based techniques) and evaluates them across diverse datasets and tasks. \\n\\n2.Detailed Experimental Analysis:\\n The paper reports extensive results with multiple metrics (BLEU, ROUGE, BERTScore for summarization/reconstruction; accuracy and F1 for QA; and specialized hallucination metrics) and compares computational overhead. \\n\\n3.Insights on Hallucination and Word Omission:\\n The investigation into different types of hallucinations (Altered Semantic Hallucination vs. Information Loss Hallucination) and the analysis of word omission effects provide nuanced understanding beyond standard performance numbers.\\n\\nWeaknesses\\n\\n1.Limited Novelty:\\n While the paper offers an excellent empirical comparison, it largely compiles and evaluates existing techniques rather than proposing fundamentally new methods. The contribution is only incremental relative to the rapidly evolving literature on prompt engineering.\\n\\n2.Methodological Justification:\\n Some design choices and hyperparameter settings are not sufficiently justified. For example, while the compression ratio is set to 0.5 for certain methods, the rationale behind this choice is not deeply explored. A more in-depth discussion on parameter sensitivity could be beneficial.\\n\\n3.Multimodal Task Performance:\\n The extension of prompt compression techniques to multimodal settings (e.g., VQA tasks) is interesting, yet the results in this area appear less convincing. Further optimization or a more dedicated analysis might be required to strengthen these claims.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Review\", \"review\": [\"**Paper Summary**\", \"The paper presents an empirical study on prompt compression methods for language models. It examines six distinct methods using three popular LLMs across 13 datasets. The evaluation employs different metric (BLEU, ROUGE, BERTScore for summarization tasks, downstream task performance for QA tasks) along with an analysis of hallucination rates. 
The study also distinguishes between long and short context settings, showing that moderate compression can improve performance on long inputs. Overall, (Long)LLMLingua and LLMLingua-2 perform best at higher compression ratios, though all methods tend to increase hallucinations, primarily due to information loss.\", \"**Strengths**\", \"The study is comprehensive, evaluating six methods with three widely used models across a diverse set of tasks.\", \"The investigation of compression effects in both long and short contexts is well motivated, and the observation that moderate compression improves performance in long-context settings is noteworthy.\", \"**Weaknesses**\", \"The significance of the differences reported in Table 4 is unclear; an average change of 1 or 2 words relative to a total of 100 words may not be meaningful.\", \"Some metrics lack clarity. For example, the method for averaging performance in Figures 4 and 5 is not well explained. Additionally, the histogram in Figure 8 would be more informative if the y-axis represented a relative metric rather than absolute counts. It is also unclear what fraction of the total omitted words the top 10 words represent.\", \"The conclusion \\u201cRemoving the same word has a larger impact on performance in long-context tasks\\u201d (line 458.5) based on? This might be the case for some of the top 10 words omitted, but might not the case for other omitted words\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Review\", \"review\": \"### Summary\\nThis paper presents a comprehensive empirical study on prompt compression methods for LLMs. The authors evaluate six different prompt compression approaches across 13 datasets spanning various tasks including news, scientific articles, commonsense QA, math QA, long-context QA, and VQA datasets. The study goes beyond traditional performance metrics to analyze aspects such as the effect on response length, hallucination rates, effectiveness in multimodal contexts, and word omission patterns.\\n### Pros\\n1. The study performs comprehensive evaluation. It examines six compression methods across multiple dimensions and diverse datasets, providing a thorough understanding of the effectiveness of different approaches. It analyzes important aspects such as hallucination rates, response length impact, and generalizability to multimodal tasks.\\n2. The findings could have direct implications for practical applications of LLMs\\n### Cons\\nThis paper appears to be misaligned with the workshop's focus on building trust in LLMs. The paper primarily addresses prompt compression for efficiency purposes, rather than directly tackling issues of trustworthiness.\", \"rating\": \"6\", \"confidence\": \"3\"}" ] }
lTYxZGbLwA
LM Agents May Fail to Act on Their Own Risk Knowledge
[ "Yuzhi Tang", "Tianxiao Li", "Elizabeth Li", "Chris J. Maddison", "Honghua Dong", "Yangjun Ruan" ]
Language model (LM) agents have demonstrated significant potential for automating real-world tasks, yet they pose a diverse array of potential, severe risks in safety-critical scenarios. In this work, we identify a significant gap between LM agents' risk awareness and safety execution abilities: while they often answer "Yes'' to queries like $\texttt{"Is executing `sudo rm -rf /*' dangerous?"}$, they will likely fail to identify such risks in instantiated trajectories or even directly perform these risky actions when acting as agents. To systematically investigate this, we develop a comprehensive evaluation framework to examine agents' safety across three progressive dimensions: 1) their knowledge about potential risks, 2) their ability to identify corresponding risks in execution trajectories, and 3) their actual behaviors to avoid executing these risky actions. Our evaluation reveals two critical performance gaps that resemble the generator-validator gaps observed in LMs: while agents demonstrate near-perfect risk knowledge (>98% pass rates), they fail to apply this knowledge when identifying risks in actual scenarios, with performance dropping by >23%, and often still execute risky actions (<26% pass rates). This trend persists even in specialized reasoning models like DeepSeek-R1, reinforcing the challenge of translating an LM's risk knowledge into safe decision-making. We take advantage of these observed gaps to develop a risk verifier that independently critiques the proposed actions by agents, with an abstractor that converts specific execution trajectories into abstract descriptions where LMs can more effectively identify the risks. Our overall system achieves a significant reduction of risky action execution by 55.3% over vanilla-prompted agents.
[ "Large Language Models", "Language Model Agents", "AI Safety", "Evaluation" ]
Accept
https://openreview.net/pdf?id=lTYxZGbLwA
https://openreview.net/forum?id=lTYxZGbLwA
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "z3Ub3k7IDF", "YuRSaMwI8X", "DJ3cHIRLp9", "0fZYHhUjQv" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741079754804, 1740486176141, 1740843903153, 1740826730903 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission113/Reviewer_swpq" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission113/Reviewer_skKj" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission113/Reviewer_uWmd" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"comment\": \"The paper heavily relies on the ToolEmu environment, limiting the generalizability of its findings, and lacks diverse, real-world datasets to validate its proposed framework. Additionally, the effectiveness of the abstractor in transforming trajectories for QA evaluation is not clearly demonstrated, and the proposed safety defenses, while useful, are relatively straightforward and could benefit from further innovation.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Summary\", \"review\": \"### Summary of the Paper:\\nThe authors of this paper evaluated the safety behaviour of AI agents across three self-defined and created frameworks. These frameworks assess the agents' basic knowledge about safety, their ability to identify risks in working trajectories, and their final execution in dangerous situations. Moreover, they created a risk verification framework for analyzing potentially risky actions.\\n\\n### Strengths:\\nDividing the evaluation into three different benchmarks is reasonable. The paper clearly presents the evaluation methods and describes the assessment process well.\\nMoreover, the authors proposed a solution for mitigating the gaps between knowledge-identification and identification-execution stages.\\n\\n### Weaknesses:\\nThe initial 144 test cases and the final 328 trajectories might not adequately cover all potential dangerous situations and behaviors across AI agents.\\n\\n### Questions and Suggestions for Improvement:\\n- I didn\\u2019t notice which model you used for the safety and helpfulness evaluator. Did you provide any manual review of the obtained scores?\\n- Similarly, I didn\\u2019t see which model you used as the LLM-based risk context extractor (Section 3.2).\\n- Did you test the system on benign trajectories that share many similarities with unsafe ones? Do AI agents based on LLM models behave properly in these test cases?\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"The work is both timely and valuable by identifying a crucial knowledge-action risk gap in emerging agentic systems and proposing a baseline defense to mitigate this gap.\", \"review\": \"Pros:\\n1. By extending prior generator-validator concepts into scenarios involving malicious requests, this paper offers important insights by systematically categorizing LM agents\\u2019 failures into three progressive levels. This framework could potentially serve as an evaluation suite for future safer agentic system designs.\\n\\n2. It concludes with actionable guidance\\u2014such as incorporating ToolEmu tool-using trajectories\\u2014to enhance the safety alignment of LLMs.\\n\\n3. The presentation and writing style is straightforward and easy to follow.\\n\\n4. While the work builds heavily upon the ToolEmu paper, it provides valuable insights by broadening the scope to more realistic settings. 
It highlights how agents can be aware of a harmful action yet fail to identify it within a trajectory, which can potentially translate into malicious execution.\", \"cons\": \"1. More details about the effectiveness of the abstractor are needed. For the proposed knowledge test, trajectories are transformed into objective descriptions for QA evaluation. However, the degree of transformation accuracy for the abstractor are not clearly presented in the paper.\\n\\n2. Most experiments rely heavily on the ToolEmu environment; incorporating additional frameworks and more diverse, real-world datasets could improve the generalizability of the findings.\\n\\n3. The proposed defenses (1) safety prompts and (2) LLM-based critique, while effective in standalone LLM settings, are relatively straightforward and might warrant further innovation for broader or domain-specific applications.\\n\\nOverall, I would suggest accepting the paper since it identifies important findings on the knowledge-action gap in agents and proposes a simple yet effective baseline defense for mitigation.\", \"rating\": \"6\", \"confidence\": \"4\"}", "{\"title\": \"Interesting perspective on safety\", \"review\": [\"Strengths:\", \"This paper seems to present a novel setup focusing on software/terminal executions for agents.\", \"It is also interesting to distinguish between the identification and the execution ability.\"], \"weaknesses\": [\"I find the paper pretty difficult to understand, but must also clarify that I am an uninformed outsider.\"], \"rating\": \"6\", \"confidence\": \"1\"}" ] }
lNCwWLMCwO
SPEX: Scaling Feature Interaction Explanations for LLMs
[ "Justin Singh Kang", "Landon Butler", "Abhineet Agarwal", "Yigit Efe Erginbas", "Ramtin Pedarsani", "Bin Yu", "Kannan Ramchandran" ]
Large language models (LLMs) have revolutionized machine learning due to their ability to capture complex interactions between input features. Popular post-hoc explanation methods like SHAP provide *marginal* feature attributions, while their extensions to interaction importances only scale to small input lengths ($\approx 20$). We propose *Spectral Explainer* (SPEX), a model-agnostic interaction attribution algorithm that efficiently scales to large input lengths ($\approx 1000$). SPEX exploits underlying natural sparsity among interactions—common in real-world data—and applies a sparse Fourier transform using a channel decoding algorithm to efficiently identify important interactions. We perform experiments across three difficult long-context datasets that require LLMs to utilize interactions between inputs to complete the task. For large inputs, SPEX outperforms marginal attribution methods by up to 20\% in terms of faithfully reconstructing LLM outputs. Further, SPEX successfully identifies key features and interactions that strongly influence model output. For one of our datasets, *HotpotQA*, SPEX provides interactions that align with human annotations. Finally, we use our model-agnostic approach to generate explanations to demonstrate abstract reasoning in closed-source LLMs (*GPT-4o mini*) and compositional reasoning in vision-language models.
[ "Interpretability", "Signal Processing", "Sparsity", "SHAP", "Interactions" ]
Accept
https://openreview.net/pdf?id=lNCwWLMCwO
https://openreview.net/forum?id=lNCwWLMCwO
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "pDQDe7G3KK", "NkhR1h4JCV", "2W4v6RXP7q", "0ZBEVOfdfM" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740893081072, 1740431906094, 1741109238977, 1740895862717 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission24/Reviewer_ZoAG" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission24/Reviewer_DtWF" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission24/Reviewer_iUop" ] ], "structured_content_str": [ "{\"title\": \"Efficiently scaling post-hoc explainability\", \"review\": [\"This paper demonstrates a post-hoc explainability algorithm for high interaction orders which scales to longer inputs. It treats the search for important interactions as a decoding problem from noisy communications. By assuming a sparse set of interactions and employing algebraic channel coding techniques, it achieves a computational complexity of O(sdn) compared to the \\u03a9(n^d) of existing methods.\", \"### Strengths\", \"The paper proposes an interesting approach for scaling post-hoc feature explainability to a greater number of input variables and a limited compute budget.\", \"Clear comparisons demonstrate the trade off between SPEX and prior approaches.\", \"The method is well detailed and easy to follow, including a thorough appendix.\", \"### Weaknesses\", \"It's not clear why post-hoc explainability methods should be used over methods which have white-box access to internals. If explainability is actually critical for the domain (e.g. medical diagnosis), truly causal explanations should be preferred over estimating interactions from inputs and outputs.\", \"Using [UNK] mask tokens to llama/4o is probably out of distribution, how faithful are post-hoc explainability methods in these cases?\", \"It would be useful to see a couple examples where interactions generated by SPEX are unfaithful.\"], \"rating\": \"6\", \"confidence\": \"2\"}", "{\"title\": \"Strong Engineering, Could Use More Motivating\", \"review\": \"This paper is clearly a well-done engineering feat. Optimizing an algorithm to manage high-dimension interactions for large numbers of features is no easy feat, especially on a limited compute budget. This paper sets itself a rigorous technical challenge and solves it rigorously, proving that it can achieve higher faithfulness than existing baselines like SHAP and LIME. Moreover, it allows for multiple interaction terms to be applied to new settings, opening up interaction-based explainability methods to new applications.\\n\\nHowever, it is important to take a step back and consider why explainability is such a prominent field in the first place. Local explainability exists to elucidate the opaque decisions made by LLMs, oftentimes to assist decision-makers who must guarantee the safety and reasonability of their model results. To this end, there are dozens of proposed explainability methods, a competitive field that SPEX is attempting to join. To prove that it is a strong addition, the authors must explain how SPEX is a useful tool to help us understand LLM outputs. \\n\\nI believe there is more work to be done on this front. The introduction talks about domains where feature interaction could offer a salient, human-understandable heuristic, such as protein design and medical diagnosis. However, the examples that are explored in Figure 1 and Section 7 are not as compelling. 
Especially for textual tasks like in Section 7.1, it is unclear how easily a human can interpret third and fourth-order interactions between features. The visual question answering is a little more informative but still requires a large amount of inference by a human viewer on how these interactions are meaningful. It is important to remember that faithfulness is far from everything for explainability methods: we can always find some high-faithfulness proxy for a model when we make our proxy sufficiently complex, but we will likely sacrifice the human interpretability of our results. It is unclear how well this method retains human interpretability besides a few select examples, and it largely feels like the question of \\\"how does this make models more interpretable\\\" is pushed to the periphery instead of being a unifying theme for the paper. I respect that the ask for more motivation is an amorphous one, hard to pin down in experiments, but I do think it is an important part of any explainability paper.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"comment\": \"The paper makes a contribution to scalable interaction-based attribution, an important step toward understanding LLM decision-making. However, it lacks discussion on human interpretability. The main concern is the paper\\u2019s relevance to LLM interpretability. While it extends SHAP/LIME-style interpretability to 1,000 features, LLMs generate outputs based not only on input but also on world knowledge embedded in their weights. Attributing an answer solely to input overlooks the issue of hallucinations. This represents a traditional approach to model interpretability which has limited relevance to this workshop.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review\", \"review\": \"In this work, the authors propose a novel input feature attribution method that efficiently considers interactions between features leveraging the inherent sparsity of these interactions in Fourier space. They propose SPEX, an algorithm for finding the proper masking patterns in an efficient manner leveraging techniques from channel coding. Through various experiments the authors demonstrate that SPEX scales to large sequences where prior works fail or become computationally prohibitive while maintaining competitive if not better performance in terms of faithfulness, counterfactual ablation, and alignment with human explanations.\", \"weakness\": \"I assume language has generally evolved to be an efficient communication scheme and is thus relatively information dense. In your tasks (sentiment classification, etc.) it is reasonable to assume that only a sparse set of features/interactions are relevant to the output, but I could see this assumption breaking down in more complex settings and tasks such as open-ended generation.\", \"rating\": \"7\", \"confidence\": \"3\"}" ] }
lEE9JpIj8t
Emotional Manipulation is All You Need: A Framework for Evaluating Healthcare Misinformation in LLMs
[]
Warning: This paper discusses potentially harmful healthcare misinformation patterns and LLM vulnerabilities The integration of Large Language Models (LLMs) into healthcare applications has raised critical concerns about their susceptibility to generating harmful medical misinformation, particularly when faced with emotionally manipulated prompt injection attacks. Through systematic evaluation of 112 attack scenarios in eight state-of-the-art LLMs, we reveal that emotional manipulation coupled with prompt injection can increase the generation of dangerous medical misinformation without warning from a baseline of 6.2% to 37.5%. We also found that emotional content not only amplifies attack success rates, but also leads to more severe forms of misinformation. Notably, models vary widely in their susceptibility - while some models like Claude 3.5 Sonnet demonstrate strong resistance across all attack types, others show high vulnerability to emotional manipulation. These findings not only underscore critical vulnerabilities in LLM safety filters, but also emphasize the urgent need for enhanced protection against emotionally manipulated prompt injection attacks. Given that 39\% of the US population already believe in alternative cancer treatments, our research highlights the life-threatening implications of AI-generated health misinformation and provides crucial insights for developing more robust safety mechanisms before deployment in clinical settings.
[ "Medical AI safety", "Prompt Injection Attacks", "Emotional manipulation in LLMs", "LLM jailbreak vulnerabilities", "Healthcare-specific adversarial attacks", "Trust and safety in medical LLMs", "LLM jailbreak vulnerabilities" ]
Reject
https://openreview.net/pdf?id=lEE9JpIj8t
https://openreview.net/forum?id=lEE9JpIj8t
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "Jvhog1bM73", "Bqdzg0vznB", "3VYM9jbGfD" ], "note_type": [ "official_review", "decision", "official_review" ], "note_created": [ 1740843241169, 1741103599251, 1740895006532 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission60/Reviewer_faXi" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission60/Reviewer_yS4z" ] ], "structured_content_str": [ "{\"title\": \"Official Review of Submission60\", \"review\": \"# Summary\\nThe paper investigates vulnerabilities in healthcare LLMs by examining prompt injection attacks\\u2014particularly those that leverage emotional manipulation\\u2014to assess their potential to amplify harmful medical misinformation. It evaluates 8 state-of-the-art LLMs across 112 attack scenarios.\\n\\n# Strengths\\nThe paper combines 6 prompt injection techniques with both emotional and non-emotional variants, offering a clear view of model performance. It provides categorization for both response severity and cancer treatment misinformation.\\n\\n# Questions & Weakness:\\n1. Could you please clarify whether the response severity and cancer treatment misinformation categorizations are performed by LLMs acting as judges or by human expert judges? This detail is not clearly mentioned in the Methods section.\\n\\n2. With 112 test scenarios (derived from 6 attacks + 1 baseline, times 8 LLMs and 2 variants), is only one prompt used per test scenario? This limited scale might impact the reliability of the conclusions.\\n\\n3. It appears that both the attack techniques and the categorizations are based on previous work. Could you explain the source of the medical prompts and clarify any novel contributions of this paper?\\n\\n4. The abstract mentions crucial insights for developing robust safety mechanisms before clinical deployment. However, these insights are not clearly detailed in the manuscript.\\n\\n5. If the goal is to identify AI safety risks in healthcare, a more effective approach might be to categorize risky inputs within healthcare scenarios (attacking methods and emotional manipulation are more like perturbations). The current benchmark's usage in this context is unclear.\", \"rating\": \"4\", \"confidence\": \"3\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"The paper lacks clarity on its methodology, threat model, dataset, and evaluation of helpfulness, and does not align with its title or focus on healthcare-specific LLMs\", \"review\": \"In the paper, the authors claim to conduct a systematic evaluation of healthcare misinformation in LLMs. This is an interesting research avenue and requires collaboration within the research community to develop more robust LLMs for the healthcare domain.\\n\\nHowever, I believe the paper has major flaws. The paper's title does not accurately reflect the content of the paper. Additionally, I found the attack threat model to be unhelpful\\u2014why would anyone use jailbreak prompts when seeking medical advice from LLMs? The threat model could have been discussed more thoroughly.\\n\\nAnother major issue is the lack of clarity regarding the dataset used for the results. 
It is unclear whether the results were based solely on the examples provided in the Appendix with different prompts, or if other datasets or prompts were used, which is not specified.\\n\\nIf the paper aims to focus on the safety of healthcare LLMs, it could have been more impactful if it had tested LLMs specifically trained for healthcare tasks, such as Med-based LLMs.\\n\\nAlso, the paper fails to mention how it measures whether the generated responses are helpful or harmful. The categorization in Table 1 alone is not sufficient to evaluate the helpfulness or harmfulness of the medical advice.\", \"rating\": \"3\", \"confidence\": \"3\"}" ] }
kuBxDdpJX1
ExpProof: Operationalizing Explanations for Confidential Models with ZKPs
[ "Chhavi Yadav", "Evan Laufer", "Dan Boneh", "Kamalika Chaudhuri" ]
In principle, explanations are intended as a way to increase trust in machine learning models and are often obligated by regulations. However, many circumstances where these are demanded are adversarial in nature, meaning the involved parties have misaligned interests and are incentivized to manipulate explanations for their purpose. As a result, explainability methods fail to be operational in such settings despite the demand (Bordt et al., 2022). In this paper, we take a step towards operationalizing explanations in adversarial scenarios with Zero-Knowledge Proofs (ZKPs), a cryptographic primitive. Specifically, we explore ZKP-amenable versions of the popular explainability algorithm LIME and evaluate their performance on Neural Networks and Random Forests. Our code is publicly available at: https://github.com/infinite-pursuits/ExpProof.
[ "Explanation", "Trust", "LIME", "Zero-Knowledge Proofs", "Auditing" ]
Accept
https://openreview.net/pdf?id=kuBxDdpJX1
https://openreview.net/forum?id=kuBxDdpJX1
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "ta1rTREIXP", "aL8JeCz58s", "KxrPNW0I04", "01oBYwFG6d" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740895944381, 1741084616426, 1740871531600, 1740701883948 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission50/Reviewer_PfwV" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission50/Reviewer_KL8S" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission50/Reviewer_97sS" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"review\": \"This work proposes a framework for explanations that can be verified by end users (as coming from the same model that made a prediction) using concepts from cryptography such as zero-knowledge proofs. The main idea is to create a protocol such that users can verify the provider is providing the right model and the right explanation for that model. In this work, the authors focus on LIME and provide methods to create zero-knowledge proof circuits for the entire explanation algorithm.\\n\\nOverall, this paper raises interesting and novel points about the real-world implementation of explanations in black box or private systems. Additionally, the authors actually create a fully zero-knowledge-provable explanation system, but I will note this appears to be mainly correctly connecting and preparing inputs to existing ZKP libraries. However, I am confused by the introduction of BorderLIME as it seems to perform the same if not worse as regular LIME in the experiments. Why did the authors decide to develop this algorithm in addition to the ZKP version of LIME?\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of ExpProof\", \"review\": \"This paper presents ExpProof, a novel framework that leverages cryptographic commitments and zero-knowledge proofs (ZKPs) to guarantee that local explanations are computed faithfully using a fixed, confidential model. The approach is motivated by adversarial scenarios where model owners might manipulate explanations. They also introduce new LIME variants to decrease the overhead introduced by using ZKPs.\\n\\nThe combination of cryptographic techniques (commitments and ZKPs) with local explanation methods is novel and addresses a crucial challenge in verifying explanation correctness without revealing sensitive model details. The paper offers a comprehensive description of the ExpProof protocol, including its adaptations (like BorderLIME) and a thorough evaluation on neural networks and random forests. The experimental results\\u2014despite being on relatively small-scale models\\u2014demonstrate feasible proof generation and verification times.\\n\\nWhile the paper serves as a strong proof of concept, it remains restricted to small neural networks and random forests, raising serious questions about scalability. It is uncertain how well the framework would handle more complex architectures or large language models, which are a primary focus of this workshop. 
A stronger evaluation setup and more extensive discussions on how the framework could be adapted to large language models would have made the work even more valuable.\", \"rating\": \"5\", \"confidence\": \"3\"}", "{\"title\": \"This paper presents ExpProof, a framework leveraging Zero-Knowledge Proofs (ZKPs) to enforce verifiable explanations for confidential machine learning models, with a focus on LIME-based explanations.\", \"review\": \"The paper tackles a critical problem in AI explainability: ensuring verifiable explanations in adversarial settings while preserving model confidentiality which is indeed an important topic. The authors introduce ZKP-amenable variants of LIME, optimizing computational efficiency via commitment schemes and proof verification techniques. The theoretical foundations are solid so good job by the authors, and the security guarantees are well-argued. Experimental evaluations on neural networks and random forests demonstrate feasibility, with proof generation times within practical bounds (~1.5 min) and verification in ~0.12s. However, the study is limited to LIME, leaving open questions on generalizability to other explainability methods (e.g., SHAP, IG). Additionally, BorderLIME exhibits high proof generation overhead (~4.85 min), raising concerns about scalability. A good research and a great research is different in terms of the generalisability of the solution of technique proposed. Further optimizations and extensions to broader explanation techniques would strengthen the work.\\n\\nStrengths\\n1. Novel integration of ZKPs for verifiable explanations\\n2. Rigorous security analysis with formal proofs\\n3. Empirical validation on real datasets and models\\n4. Efficient proof verification (~0.12s), practical feasibility\\n\\nWeaknesses\\n1. Computational overhead (~1.5 min for proof generation, ~4.85 min for BorderLIME)\\n2. Limited to LIME; unclear generalisation to other methods\\n3. No real-world deployment or industry case studies\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
khh9YSBCf4
Learning Automata from Demonstrations, Examples, and Natural Language
[ "Marcell Vazquez-Chanlatte", "Karim Elmaaroufi", "Stefan Witwicki", "Matei Zaharia", "Sanjit A. Seshia" ]
Expert demonstrations have proven an easy way to indirectly specify complex tasks. Recent algorithms even support extracting unambiguous formal specifications, e.g. deterministic finite automata (DFA), from demonstrations. Unfortunately, these techniques are generally not sample-efficient. In this work, we introduce $L^\star LM$, an algorithm for learning DFAs from both demonstrations \emph{and} natural language. Due to the expressivity of natural language, we observe a significant improvement in the data efficiency of learning DFAs from expert demonstrations. Technically, $L^\star LM$ leverages large language models to answer membership queries about the underlying task. This is then combined with recent techniques for transforming learning from demonstrations into a sequence of labeled example learning problems. In our experiments, we observe the two modalities complement each other, yielding a powerful few-shot learner.
[ "Neurosymbolic Artificial Intelligence", "Multimodal Learning", "Learning Automata", "Learning from Demonstrations", "Deterministic Finite Automata (DFA)", "Large Language Models (LLMs)", "Formal Task Specification" ]
Accept
https://openreview.net/pdf?id=khh9YSBCf4
https://openreview.net/forum?id=khh9YSBCf4
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "S3wiOFFNLt", "LJq5XIuhGy", "5oFyL67Aid" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740855887040, 1740884701917, 1741053767167 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission15/Reviewer_Zuoz" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission15/Reviewer_pKyT" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"review\": \"This paper introduces L*LM, a novel algorithm for learning Deterministic Finite Automata (DFAs) from multiple modalities: natural language descriptions, labeled examples, and expert demonstrations. The authors integrate large language models (LLMs) with classic automata learning techniques and Demonstration Informed Specification Search (DISS) to create a multimodal approach to learning formal task specifications.\\n\\n# Pros\\n- builds on well-established frameworks in automata learning, good theoretical foundation\\n- proposes a practical solution (allowing \\\"unsure\\\" responses) to LLM hallucinations in inducing automata\\n- uses LLMs in a relatively restricted way (answering yes/no/unsure to membership queries) to build a complex grammar\\n\\n# Cons\\n- although the paper tests on multiple domains, they are still relatively simple. would be interesting to see if this would work on something slightly more complex\\n\\n# Originality, Clarity, Significance\\nThis is plugging together existing work like DISS, but I think the idea of using LLMs through binary membership queries is a neat one. It is more principled than directly asking the LLM to infer a grammar. I like it, I think the design is original.\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"Learning DFA from natural languages and demonstrations\", \"review\": \"The paper proposes a method to learn automata from demonstrations and natural descriptions. They integrate LLMs into either the L* or SAT methods, where the learner actively asks queries and the LLM serves as an oracle, responding to queries based on natural language descriptions with \\\"Yes,\\\" \\\"No,\\\" or \\\"Unsure.\\\" To further address hallucination and incomplete description issues, they incorporate the DISS algorithm, which leverages demonstration examples to determine the DFA. Experimental results on 2D workspace problems show that this approach is effective.\", \"rating\": \"6\", \"confidence\": \"2\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}" ] }
kFpbxWy1p8
A Missing Testbed for LLM Pre-Training Membership Inference Attacks
[ "Mingjian Jiang", "Ken Ziyu Liu", "Sanmi Koyejo" ]
We introduce a simple and rigorous testbed for membership inference attacks (MIA) against pre-training sequences for large language models (LLMs). Our testbed addresses the following gaps in existing evaluations, which lack: (1) \textit{uniform} sampling of member/non-member documents of varying lengths from pre-training shards; (2) large-scale \textit{deduplication} at varying strengths, both within and across the sampled members/non-members; and (3) rigorous \textit{statistical tests} to detect member/non-member distribution shifts that cause faulty evaluations and are otherwise imperceptible to the heuristic techniques used in prior work. We provide both global- and domain-level datasets (e.g., Reddit, Stack Exchange, Wikipedia), derived from fully-open pre-trained LLM/dataset pairs including Pythia/Pile, Olmo/Dolma, and our custom pre-trained GPT-2-Large on FineWeb-Edu. We additionally open source a modular and extensible codebase that facilitates the creation of custom, statistically validated, and deduplicated evaluation data using future open models and datasets. In sum, our work is a concrete step towards addressing the evaluation issues discussed by prior work.
[ "Large Language Models", "Foundation Models", "Membership Inference Attacks", "Dataset", "Privacy", "Benchmark", "Trustworthy AI" ]
Accept
https://openreview.net/pdf?id=kFpbxWy1p8
https://openreview.net/forum?id=kFpbxWy1p8
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "syBLKMVXWS", "pdMRzRc4Eo", "eIUSljPgvZ", "aFIgsoVaJI" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741056752739, 1740751371623, 1740663541708, 1740433346062 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission94/Reviewer_bbRN" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission94/Reviewer_wQaP" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission94/Reviewer_x5LU" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Testbed and dataset for Membership Inference Attacks\", \"review\": \"# Review\\n\\nThe authors introduce a testbed and dataset for Membership Inference Attacks (MIA). They provide both global- and domain-level datasets through uniform sampling based on fully open model/dataset pairs and open-source a modular codebase.\\n\\n## Strengths\\n\\n1. High-quality data cleaning methods: The authors extract textual features and conduct Kolmogorov-Smirnov (KS) tests. Additionally, they implement blind baselines to detect significant distribution shifts between members and non-members.\\n2. Open-source datasets: The authors provide global- and domain-level datasets derived from fully open model/dataset pairs.\\n3. Extensibility: The same approach can be applied to sensitive industry data processing, which is beneficial for data security.\\n\\n## Weaknesses\\nThe effects of different dataset constructions are not discussed. Additionally, the sensitivity of MIA may vary across different model architectures.\", \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"Important direction, but structure of the paper should be improved (or submitted to a long-track)\", \"review\": \"The authors propose a testbed for membership inference attacks that addresses some gaps in the current evaluations.\", \"strengths\": [\"The introduced testbed is claimed to possess three important properties that address previous gaps in the literature, such as true uniform sampling of member/non-member sequences.\", \"This paper gives important insights on how to correctly evaluate MIAs.\"], \"weaknesses\": [\"Structure: the paper reads like a full paper, where important sections were moved to the appendix due to 4 pages limitation (including Conclusion, Related work, some results and analysis, etc). This really should have been submitted to the Long-papers track.\", \"I am not familiar with MIAs, and this is hard for me to judge the technical solidness of the paper. Furthermore, as a reader outside of the field, it was really hard for me to follow the paper, since lots of important pieces were hidden in the appendix (like missing the context of the work, which is described in the appendix). Due to the issues with the paper structure, I\\u2019d recommend rejection, however, I am not certain in my evaluation as non-expert.\"], \"rating\": \"3\", \"confidence\": \"2\"}", "{\"title\": \"Review of \\\"A Missing Testbed for LLM Pre-Training Membership Inference Attacks\\\"\", \"review\": \"### Strength\\nThis paper presents a standardized testbed for evaluating membership inference attacks (MIAs) on LLM pre-training data, addressing key issues in data bias, distribution shifts, and unreliable evaluation methods. It introduces true uniform sampling, large-scale deduplication, and Kolmogorov-Smirnov statistical tests to improve dataset quality and prevent misleading MIA results. 
The work is highly reproducible, releasing open-source datasets, models, and evaluation tools, making it a valuable contribution to MIA research.\\n\\n### Weakness\\nHowever, the paper lacks real-world attack case studies, focusing mainly on dataset validation without directly evaluating state-of-the-art MIAs on the proposed testbed. Additionally, computational cost and scalability are not fully addressed, raising concerns about its feasibility for large-scale models. The study also relies heavily on statistical tests without deep analysis of MIA performance variations under different dataset conditions.\", \"rating\": \"7\", \"confidence\": \"3\"}" ] }
iilhN2MycO
MALIBU Benchmark: Multi-Agent LLM Implicit Bias Uncovered
[ "Ishwara Vasista", "Imran Mirza", "Cole Huang", "Rohan Rajasekhara Patil", "Aslihan Akalin", "Kevin Zhu", "Sean O'Brien" ]
Multi-agent systems, which consist of multiple AI models interacting within a shared environment, are increasingly used for persona-based interactions. However, if not carefully designed, these systems can reinforce implicit biases in large language models (LLMs), raising concerns about fairness and equitable representation. We present MALIBU\footnote{You can find the MALIBU Benchmark here: \url{https://anonymous.4open.science/r/MALIBU-Benchmark-228C}}, a novel benchmark developed to assess the degree to which LLM-based multi-agent systems implicitly reinforce social biases and stereotypes. MALIBU evaluates bias in LLM-based multi-agent systems through scenario-based assessments. AI models complete tasks within predefined contexts, and their responses undergo evaluation by an LLM-based multi-agent judging system in two phases. In the first phase, judges score responses labeled with specific demographic personas (e.g., gender, race, religion) across four metrics. In the second phase, judges compare paired responses assigned to different personas, scoring them and selecting the superior response. Our study quantifies biases in LLM-generated outputs, revealing that bias mitigation may favor marginalized personas over true neutrality, emphasizing the need for nuanced detection, balanced fairness strategies, and transparent evaluation benchmarks in multi-agent systems.
[ "LLM", "Implicit Bias", "Multi-agent", "AI Alignment", "Persona" ]
Accept
https://openreview.net/pdf?id=iilhN2MycO
https://openreview.net/forum?id=iilhN2MycO
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "nx6aJgg5hP", "bqrxITfCbF", "MWBeyKecyH", "F4Zszp1iNI" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740851272722, 1741056623175, 1740925672381, 1740947864925 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission93/Reviewer_EbcJ" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission93/Reviewer_c9VS" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission93/Reviewer_Cb1L" ] ], "structured_content_str": [ "{\"title\": \"MALIBU is a new benchmark that systematically evaluates biases in LLM-based multi-agent systems via scenario-based tasks and a two-phase, LLM-based judging process. By comparing responses labeled with various demographic personas, MALIBU reveals how bias mitigation strategies may inadvertently favor marginalized personas rather than achieving true neutrality.\", \"review\": \"Strengths:\\n\\nThe paper discusses a timely and important topic. They consider multi-agent systems, specifically persona-based interactions which amplify these biases and reinforce harmful stereotypes. I find it interesting that they consider multi-agent systems where as most studies (for eg. Rainbow teaming) are looking for biases at the model level. They explore methods for measuring implicit biases in LLM-based multi-agent systems and present a benchmark to identify and reduce these biases. They show interesting examples of where existing bias correction techniques can result in favoring the marginalized personas. For eg, they show that GPT-4o mini and Deepseek-v3 show that Female persons outperform males across all measured traits (creativity, efficiency, accuracy and reliability). It could also be interesting to see score differences before bias corrections are made to a model. So for eg, seeing score differences between base llama models and llama-instruct models.\", \"weaknesses\": \"Although the paper's findings are quite intriguing, I'm not entirely clear on how the methodology was executed. The following are questions I have about their methodology: \\n(1) How were the base scenarios generated? How were they modified in an iterative process? It would be helpful to see a few examples in the appendix. \\n(2) Why were two responses produced for two scenario?\\n(3) What details are included in the resulting benchmark? \\n\\nIt would be helpful to see a Figure 1 where they illustrate the details of phase 1 and phase 2 and also provide examples of what is contained in the benchmark.\", \"rating\": \"5\", \"confidence\": \"4\"}", "{\"title\": \"Review of MALIBU Benchmark\", \"review\": \"Summary: This paper proposes a benchmark to determine implicit bias in LLM-based multi-agent systems by evaluating how they score responses \\\"written by people\\\" with different socio-demographic attributes.\", \"strengths\": [\"interesting idea that lead to interesting findings about implicit biases in LLM models\"], \"weaknesses\": [\"not clear how this is specifically relevant to multi-agent systems; doesn't this apply to LLM models more generally?\", \"structure of the methodology section could be a bit clearer. Maybe state at the beginning how your experiment is set up and then go into the details.\", \"there is not enough emphasis on MALIBU. 
There is a strong investigation of Implicit Bias Measurement, but after reading the paper, I did not really think about or remember MALIBU a lot.\"], \"rating\": \"5\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"MALIBU benchmark on implicit biases of multi-agent systems\", \"review\": \"This paper introduces MALIBU, a benchmark measuring implicit social biases and stereotypes expressed by LLM-based multi-agent systems. The systems engage in demographic persona-based (e.g. gender, race, religion) scenarios (single-response assessments and minimal contrastive pair comparisons) and the creativity, accuracy, efficiency and reliability of the generated text is evaluated. MALIBU is run on GPT-4o mini and DeepSeek-v3 and finds significant implicit biases across demographic categories.\", \"strengths\": [\"Clarity: paper is well-structured and has a clear contribution and narrative\", \"Originality/Significance: MALIBU examines implicit biases in multi-agent systems with is both a new (not many benchmarks with focus on the intersection of biases and multi-agent systems) and useful (increasing prevalence of multi-agent systems) contribution\"], \"weaknesses\": [\"introduce more of the related work on known biases of LLMs, like Mazeika et al, 2025 (Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs)\", \"be more explicit about the total number of scenarios\", \"a more extensive (more scenarios, more demographic groups) benchmark and evaluation (more models, more in depth comparison of reasoning) would strengthen the paper.\"], \"rating\": \"6\", \"confidence\": \"4\"}" ] }
iT0oH3DfAV
Maybe I Should Not Answer That, but... Do LLMs Understand The Safety of Their Inputs?
[ "Maciej Chrabaszcz", "Filip Szatkowski", "Bartosz Wójcik", "Jan Dubiński", "Tomasz Trzcinski" ]
Ensuring the safety of Large Language Models (LLMs) is critical, but currently used methods in most cases sacrifice model performance to obtain increased safety or perform poorly on data outside of their adaptation distribution. We investigate existing methods for such generalization and find them insufficient. Surprisingly, while even plain LLMs recognize unsafe prompts, they may still generate unsafe responses. To avoid performance degradation and preserve safety, we advocate for a two-step framework, where we first identify unsafe prompts via a lightweight classifier and apply a "safe" model only to such prompts. In particular, we explore the design of the safety detector in more detail, investigating the use of different classifier architectures and prompting techniques. Interestingly, we find that the final hidden state for the last token is enough to provide robust performance, minimizing false positives on benign data while performing well on malicious prompt detection. Additionally, we show that classifiers trained on representations from different model layers perform comparably across the later model layers, indicating that a safety representation is present in the LLMs' hidden states at most model stages. Our work is a step towards efficient, representation-based safety mechanisms for LLMs.
[ "Large Language Models", "Unsafe Detection", "Inner Representations", "Safety" ]
Accept
https://openreview.net/pdf?id=iT0oH3DfAV
https://openreview.net/forum?id=iT0oH3DfAV
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "honF40iqye", "cH0mlws45M", "MzoFiNTztO" ], "note_type": [ "official_review", "decision", "official_review" ], "note_created": [ 1740900493562, 1741057118386, 1740982155810 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission114/Reviewer_V6vM" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission114/Reviewer_r4VS" ] ], "structured_content_str": [ "{\"title\": \"Review for Paper About Whether LLMs Understand the Safety of their Inputs\", \"review\": \"# Summary\\nThis paper addresses the urgent need for safer Large Language Models (LLMs) without sacrificing their performance. Current methods either sacrifice the quality of the generated text or do not generalize to new inputs. The authors propose a two-stage solution: a small classifier first identifies potentially unsafe prompts, and then a \\\"safe\\\" model is only applied to those prompts. This approach aims to preserve the high-quality outputs of the original LLM on benign prompts while being safe on malicious ones.\\n\\nThe paper focuses on developing an effective safety classifier, experimenting with different architectures and prompting techniques. Surprisingly, a simple classifier using the final hidden state of the last token is found to be effective at detecting unsafe prompts. Further, the paper discovers that safety-related information is present in the hidden states of LLMs at various layers, with implications for potential improvements in classification efficiency. The findings demonstrate the feasibility of enhancing LLM safety without degrading performance and offer intriguing observations on the internal representation of safety information by LLMs.\\n\\n# Pros and Cons\\n## Pros\\n1. **Results:** The results on WJ and MMLU are interesting, showing that the proposed approach is effective in reducing the targeted safety risks.\\n2. **Layer-Wise Analysis of Models for Safety Assessment:** The layer-wise analysis of models for safety assessment is a good approach to understand the safety of the models, and seems to be newly introduced in this paper.\\n3. **Quality and Writing:** The paper overall seems to be well-written and fairly easy to read. Addition of more details in the Appendix section could be beneficial. \\n\\n# Cons\\n1. **Novelty:** The novelty seems limited, with the previous work having used prompting-based approaches evaluate whether the input prompts are safe or not. The concept of having separate \\\"safe\\\" and \\\"unsafe\\\" models seems to be unique to this paper, however.\\n2. **Limited Set of Cited Papers:** There seems to be a limited set of cited papers overall. Some relevant papers seem to be missing, and the authors are encouraged to provide a more comprehensive literature review if space permits. The following paper seems to be relevant but on the topic of unlearning: \\\"Thaker, Pratiksha, et al. \\\"Guardrail baselines for unlearning in llms.\\\" arXiv preprint arXiv:2403.03329 (2024).\\\"\\n\\n# Quality and Clarity\\nThe paper overall seems to be of good quality and the concepts are well-explained in spite of the page limit. The authors could extend the Appendix section to provide more details on the experimental setup as well as for a more comprehensive literature review. A clearer presence of the potential future directions could also be beneficial.\\n\\n# Minor/Major Errors\\nNo major errors were found in the paper. The titles for some of the sections seem to be too long, esp. 
for section 4.2, even though this is a very minor (potential) issue.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of \\\"Do LLMs Understand The Safety of Their Inputs\\\"\", \"review\": \"Summary:\\nThis paper addresses the issue of performance degradation of LLMs that have been fine-tuned for safety by exploring classifiers that first predict whether a prompt is safe or unsafe, which is then followed by an appropriate LLM selection (i.e., the fine-tuned for safety model is only chosen when a prompt has been classified as unsafe).\", \"strengths\": [\"the paper introduces a seemingly effective and easy-to-implement solution to the performance-degradation problem\", \"the writing in most parts of the paper is clear\"], \"weaknesses\": [\"only addresses this problem using the Mistral model; does the safe-unsafe classification work just as well for other models?\", \"no examples or definitions of safe/unsafe prompts. Are the used datasets enough?\", \"the writing in section 3 is a bit difficult to follow. The purpose of the different training datasets only becomes clear after several read-throughs.\", \"Overall, while this paper introduces an interesting idea, it could benefit from a more rigorous approach of evaluating the feasibility of this idea. At its current state, it feels more like a paper on the Mistral model, rather than for LLMs, more generally.\"], \"rating\": \"6\", \"confidence\": \"3\"}" ] }
iSejnrMsVd
Understanding the Relationship between Prompts and Response Uncertainty in Large Language Models
[]
Large language models (LLMs) are widely used in decision-making, but their reliability, especially in critical tasks like healthcare, is not well-established. Therefore, understanding how LLMs reason and make decisions is crucial for their safe deployment. This paper investigates how the uncertainty of responses generated by LLMs relates to the information provided in the input prompt. Leveraging the insight that LLMs learn to infer latent concepts during pretraining, we propose a prompt-response concept model that explains how LLMs generate responses and helps understand the relationship between prompts and response uncertainty. We show that the uncertainty decreases as the prompt's informativeness increases, similar to epistemic uncertainty. Our detailed experimental results on real-world datasets validate our proposed model.
[ "Large language models", "Uncertainty Quantification", "Explanability", "Model Response Uncertainty Quantification", "Prompt Informativeness" ]
Reject
https://openreview.net/pdf?id=iSejnrMsVd
https://openreview.net/forum?id=iSejnrMsVd
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "xth4mQdLWb", "w9qVyz6eKI", "7y0p4nWqBa", "0XcicyEw1V" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740934685121, 1740951452085, 1741078719084, 1740908082191 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission64/Reviewer_oy7C" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission64/Reviewer_1dhC" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission64/Reviewer_4g5Q" ] ], "structured_content_str": [ "{\"title\": \"Interesting experiments but unsure what the prompt/response/concept model adds\", \"review\": \"The paper studies the relationship between prompt quality and response accuracy in the settings of medical multiple choice question answering and medical recommendations in an app. The paper's main empirical findings can be boiled down to: a) when prompts are more corrupted, response accuracy decreases and entropy increases and b) when irrelevant concepts are introduced in the prompt, the same things occurs.\", \"pros\": [\"The paper tackles an interesting domain where accuracy are highly desired from these models.\"], \"cons\": [\"It is not clear why the prompt-concept model is needed or what novel insights it provides. Before reading the paper, I would have expected that more corrupted prompts and prompts with more irrelevant concepts lead to worse answers by a model and that prompts. I do not see any evidence in the paper that the model developed fits how LLMs provides any novel insight into how LLMs operate.\"], \"rating\": \"4\", \"confidence\": \"2\"}", "{\"title\": \"Good paper on studying the relationship between prompt informativeness and response uncertainty.\", \"review\": \"This paper studies how the informativeness of input prompts affects response uncertainty in large language models (LLMs). The authors propose a Prompt-Response Concept (PRC) model to explain how LLMs generate responses and show that increasing prompt informativeness reduces uncertainty, akin to epistemic uncertainty in machine learning. Through theoretical analysis and experiments\\u2014 including a mobile health intervention simulation\\u2014they demonstrate that more informative prompts lead to more consistent and reliable responses.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Reject\", \"comment\": \"The paper's main weakness is that the proposed PRC model does not provide clear novel insights beyond the intuitive expectation that corrupted or irrelevant prompts lead to worse LLM responses. Additionally, the theoretical results depend heavily on the PRC model's validity, and the experimental design lacks sufficient detail in the main text, requiring readers to consult the appendix for crucial context.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Official Review of Submission64 by Reviewer 4g5Q\", \"review\": \"Summary:\\n\\nThis paper presents a framework to relate inputs and outputs of large language models (LLMs), named the prompt-response concept (PRC) model. The PRC model uses concepts as a unit of measurement for language and treats LLMs as functions mapping prompts to responses. Assuming a language model is well-trained, the PRC model has been proven to indicate that entropy in responses will decrease as the prompt includes more concepts. 
Experiments are done on datasets and simulations to demonstrate that the language models\\u2019 performance at a given task can improve when more relevant concepts are included in its prompt, and its uncertainty decreases as more relevant concepts are included in the prompt.\", \"pros\": [\"The formalization of prompt information can be useful for work with LLMs in general.\", \"The PRC model is an interesting and novel way of viewing how language models function.\", \"The experiments consider many factors and test on standard MCQ datasets and a simulation, showing the effects of different prompts in different contexts.\"], \"cons\": [\"There is minimal description of the design of the experiments done in the main body of the paper. It seems obligatory (especially for 4.2 and 4.3) to consult the appendix for context on the experiments being analyzed.\", \"The theoretical results derived in the paper rely on the PRC model that this paper proposes. Thus, the quality of the PRC model can greatly affect the significance of the theoretical results.\"], \"rating\": \"7\", \"confidence\": \"3\"}" ] }
iQOGhe3Cj9
Budget-Constrained Learning to Defer for Autoregressive Models
[]
The learning to defer (L2D) framework gives a model the choice to defer prediction to an expert based on the model's uncertainty. We assume an L2D setting for sequence outputs where a small model can defer specific outputs of the overall prediction to a large model, in an effort to interweave both models throughout the prediction. We propose a Learn Then Test approach to tune a token-level, confidence-based thresholding rejector for pre-trained predictors, with statistical guarantees of staying within a user-defined budget while maximizing accuracy. We use Bayesian optimization to efficiently search the space of thresholds. In our experiments, we also empirically demonstrate that this method can achieve budget control while maintaining the prediction quality of the overall system on text summarization.
[ "learning-to-defer", "risk control" ]
Reject
https://openreview.net/pdf?id=iQOGhe3Cj9
https://openreview.net/forum?id=iQOGhe3Cj9
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "fesAASdm7R", "IvVxBeyINU", "3TlMemiUZ9" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740687278109, 1740890001658, 1741083666791 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission37/Reviewer_GGrW" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission37/Reviewer_mDP2" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Budget-Constrained Learning to Defer for Autoregressive Models\", \"review\": \"The paper presents a well-structured methodology, systematically dividing the approach into Hyperparameter Optimization (HPO) and the Learn Then Test (LTT) framework, making it easy to follow. The use of Gaussian Processes and Hypervolume Indicators (HVI) for threshold selection is an effective strategy for efficiently navigating the optimization space. However, the results lack sufficient explanation, which diminishes the novelty and impact of the findings. While the study emphasizes cost efficiency, there is little discussion on the actual computational cost compared to grid search or simpler heuristic approaches, making it unclear how to benchmark the efficiency of this method. Additionally, deferral decisions are based on softmax entropy, yet softmax probabilities are known to be overconfident, and the paper does not discuss how bias in the smaller model might affect deferral performance. Another critical consideration is the curse of dimensionality in Bayesian Optimization, which is briefly mentioned in the discussion. The study sets a maximum sequence length of 20 tokens, but real-world applications often require handling significantly longer sequences. As sequence length increases, the dimensionality of the BO problem grows, making it more difficult for Gaussian Processes (GPs) to generalize effectively. A Survey on High-Dimensional Gaussian Process Modeling with Application to Bayesian Optimization (Binois & Wycoff) might help, this paper highlights that GPs struggle in high-dimensional spaces due to distance concentration effects and the exponential increase in complexity. Addressing scalability concerns and potential mitigation strategies, such as dimensionality reduction techniques or structured optimization, would significantly enhance the study\\u2019s practical applicability.\", \"rating\": \"6\", \"confidence\": \"2\"}", "{\"title\": \"Mathematically sound and Interesting Methodology, Lacking Relevance to Trust\", \"review\": \"# Evaluation of the Work\\n\\n## Summary of my review\\nOverall, the paper is interesting and mathematically well thought out. It represents an interesting approach. However, I do not see a link between this kind of learning method and trust in LLMs. I question the relevance of this work to the workshop while appreciating its general validity. I finally settled on a weak accept, as I do not believe it is a fit in the workshop topics presented (or anything related to trust), but I do not know what the workshop curators define as other relevant categories. \\n\\n## Quality \\n**Pros:** \\n- Mathematically justified and sound method\\n- Tested and validated methodology \\n\\n**Cons:** \\n- Limited testing. It feels as though the extreme summarization task is far too limited for the proposal. However, being a workshop paper, I understand that this methodology is likely still under test. \\n- No justification as to why extreme summarization is an acceptable/appropriate task. 
I am left with questions as to why you chose this task and why it is representative of the field. \\n\\n## Clarity \\n**Pros:** \\n- As an applied ML researcher, I was able to understand the difficult mathematical equations and the explanation of the methodology\\n- The introduction lays out the exact contribution of the paper, as well as the layout of the paper\\n\\n**Cons:** \\n- Perhaps for a Trust workshop, more time should be spent on emphasizing how this model conforms to the trust criteria laid out in the description\\n- I did not understand Figure 1b. To me, it looks a though accuracy decreases as the budget increases. This directly contradicts lines 200-204, explaining the figure. \\n- Occasionally, I felt that the explanations of the methodology, particularly those around hyper-parameter tuning, were too in-depth, while ignoring the aspects of the paper (Discussion and Experiment being 1 page with a figure, compared to the 4 pages spent explaining the methodology) \\n\\n## Originality \\nAs much as I would like to comment on the originality, I have never worked with BO. I do not have a substantial enough grasp of the literature. From my limited understanding, it seems relatively novel but mostly builds upon the LTT framework. Essentially, the point of the work is a justification for using the LTT framework more effectively. \\n\\n## Significance & Relevance \\nIn a broad sense, it falls under \\\"Error detection and correction,\\\" perhaps? However, I can not find any workshop topic that directly contributes to it. The paper, while significant in its own way, does not mention trust or LLM robustness. It overlooks the purpose of the workshop. I was hopeful that the discussion would explain the relationship of this paper to trust, but instead, it focused on future research directions.\", \"rating\": \"6\", \"confidence\": \"2\"}", "{\"decision\": \"Reject\", \"comment\": \"There is no comparison with Speculative Decoding where the large model can simply abstain based on the smaller model's uncertainty. The novelty of this work is limited to the best of my understanding given the well-known SD framework\", \"title\": \"Paper Decision\"}" ] }
i7WBUb3Dl9
XtraGPT: LLMs for Human-AI Collaboration on Controllable Scientific Paper Refinement
[]
The increasing volume of scientific publications highlights the growing need for high-quality academic writing. However, while groundbreaking ideas are often present, many papers fail to meet academic writing standards. Unlike open-ended applications of large language models (LLMs) in research, which delegate creative tasks to AI, we emphasize a human-centered approach where researchers provide ideas and drafts while LLMs strictly follow user instructions for refinement. All XtraGPT data, training and evaluation processes, and models will be open-sourced. We propose XtraGPT, LLMs designed to assist authors by delivering instruction-driven, context-aware revisions that (1) adhere to user instructions, (2) align with general academic writing standards, and (3) are consistent with the whole paper. Leveraging a dataset of 7,040 ICLR 24 papers and over 140,000 question-answer pairs, XtraGPT enhances specific sections without compromising the paper’s integrity. Experimental results show that XtraGPT-7B surpasses similarly sized models and is competitive with GPT-4o-mini in providing high-quality, context-aware refinements. We also found that scaling up model parameters provides limited improvement on the difficult task of paper scoring. Modifying six sections with XtraGPT can improve the paper’s rating according to the predictor. By prioritizing controllability in the task of paper refinement, XtraGPT empowers researchers to focus on innovation while relying on the system to handle the demands of academic writing with context understanding and adherence to academic standards and user instructions.
[ "Paper Refinement", "Large Language Models" ]
Reject
https://openreview.net/pdf?id=i7WBUb3Dl9
https://openreview.net/forum?id=i7WBUb3Dl9
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "voTACbOwbi", "dbAHqRsJ4y", "GfzAxcAyTu", "5xROJKWslr" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740894831441, 1741109822130, 1740815683184, 1740772118611 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission101/Reviewer_q4jg" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission101/Reviewer_deWm" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission101/Reviewer_7dvi" ] ], "structured_content_str": [ "{\"title\": \"Meaningful research topic with unclear paper writing\", \"review\": \"**Summary**:\\nThis paper presents the problem of how to use the LLM to assist humans to write academic papers. To solve this problem, authors collected a dataset called XtraQA of high-quality question-answer pairs. Additionally, they proposed a model, XtraGPT, of 7B model size to help paper writing. Their results show that the proposed model surpass the models of the similar size and be comparable to GPT-4o-mini.\\n\\n**Strengths:** \\n 1. The research topic (i.e., how to leverage LLMs to help academic paper writing) is interesting and meaningful.\\n\\n**Weaknesses:** \\n 1. The overall writing of the whole paper cannot achieve the level of an academic paper, which happens to be the topic of this paper (i.e., how to use LLMs to help paper writing). I don't believe this paper is ready to publish or is even ready for reviewers to review. \\n 2. Specially, I can point out several evidences: \\n (1) The first paragraph of the introduction section is unnecessary. \\n (2) In the abstract, the paper mentioned \\\"The XtraGPT training and evaluation ....\\\" (Line 17-18) even before mentioned the concept of XtraGPT (Line 19). \\n (3) In Line 072, it mentioned the controllable generation, but the meaning of it is not clearly and explicitly explained in the paper. \\n (4) In 085, the enumerate number \\\"3)\\\" was forgotten. \\n (5) In Line 309 to 323, there is no intuition and explanation why the LC win rate is defined as this way. It is invented by the authors, or it is a predefined concept? \\n (6) Figure 4 have overlapping x ticks for all four subplots, which makes it hard to see them clearly. \\n 3. After reading the whole paper, it is not very clear to me why I need to use XtraGPT to assist myself to refine my paper, and what its advantages compared to using ChatGPT (e.g., GPT-4o) for this purpose? \\n 4. The methodology used is not very sound. The paper is using LLM-as-a-Judge evaluation, but it doesn't discuss what the limitations (e.g., length bias, position bias, etc.) are for employing it and how to mitigate these limitations.\", \"rating\": \"3\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"comment\": \"Given the low relevance to the workshop and the weaknesses outlined by R1 and R2, I recommend rejection.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Mediocre results and clarifications needed to justify the paper's contributions\", \"review\": [\"Strengths:\", \"This paper introduces a fine-tuned model that can generate scientific writing and its training dataset.\", \"The released model is shown to be able to outperform existing models on the task.\"], \"weaknesses\": [\"Are there words missing in line 18? The grammar seems weird in this sentence \\u201cThe XtraGPT training and evaluation processes, and models will be open-sourced\\u201d. 
There are also quite some more grammar errors in the paper, which makes it hard to understand the paper sometimes.\", \"It seems like the training data was generated using GPT-4o-mini as described at line 188, but it is unclear whether the authors had done some quality checks over the generated data as the model can still hallucinate given the nonzero hallucination rate. It\\u2019ll be helpful if the authors can clarify this or include an agreement rate of the model\\u2019s generated data against human\\u2019s preference to establish more trust in their curated dataset. The authors seem to mention that some results on this are included in table 8 and table 9, but the captions of those tables were confusing, making it difficult to interpret the results without any more explanation in text.\", \"Visibility of Figure 2 should be improved.\", \"The paper only focused on one data distribution \\u2013 2024 ICLR papers. Since XtraGPT is proposed as a general scientific writing model, I wonder how the results shown in the paper can generalize to different distributions, especially when the model is trained using a technique like SFT. The authors should include some testing on unseen distributions to provide more practical insights.\", \"I\\u2019m not sure how significant the result is. The biggest contribution claimed in the paper is the model\\u2019s ability to supposedly help refine a paper. To demonstrate, the authors claimed that randomly \\u201crefining\\u201d one paragraph from each of the six sections in a paper using their model can improve the overall rating by 0.02, but I\\u2019m not sure whether an improvement of 0.02 in the overall rating can be called significant or not.\", \"I\\u2019d like to see a more in-depth analysis on which aspects the model outperforms human\\u2019s writing. For example, is it that the model generates text with better grammar? a better flow? or a more integrated discussion that better synthesizes important information from the context? This can help provide more useful actionable insights for users to know which sections/parts are the most useful to leverage AI to help. However, the paper only evaluated the model on papers undergone random replacement of paragraphs, which don\\u2019t really provide much actionable insights.\"], \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"Review\", \"review\": \"The paper introduces and explores a pretty interesting topic of leveraging large models for academic writing. It would be of certain contribution to have open-source models comparable to closed-source models in academic writing for further research study.\", \"strength\": [\"The idea of finer level control for generation is of certain importance. The authors define a clear flow of how we show guide LLM for generation\", \"The experiments and analysis propose some interesting findings.\"], \"weakness\": [\"For the first question in the paper, \\\"Why Can\\u2019t Existing LLMs Excel in Paper Generation?\\\", Apart from the reasons raised in the paper, one potential drawback of existing tools is that they cannot read tables and figures, nor generate tables or figures. However, \\\"Figures and tables are key sources of information in many scholarly documents.\\\"[1]. 
As a result, a potential weakness of the paper is that this critical issue is not discussed nor resolved though it attempts to figure out the reason why LLMs cannot excel in paper generation.\", \"The writing is more like a blog rather than an academic though the paper is trying to study the scheme of academic papers. For example, the quote at the beginning of the introduction is not of a certain importance and is pretty weird.\", \"[1]Clark, Christopher, and Santosh Divvala. \\\"Pdffigures 2.0: Mining figures from research papers.\\\" In Proceedings of the 16th ACM/IEEE-CS on Joint Conference on Digital Libraries, pp. 143-152. 2016.\"], \"rating\": \"7\", \"confidence\": \"4\"}" ] }
i31cKXiyim
UNLOCKING HIERARCHICAL CONCEPT DISCOVERY IN LANGUAGE MODELS THROUGH GEOMETRIC REGULARIZATION
[ "Ed Li", "Junyu Ren" ]
We present Exponentially-Weighted Group Sparse Autoencoders (EWG-SAE), which aim to balance reconstruction quality and feature sparsity while resolving emerging problems in interpretable language model analysis, such as feature absorption, in a linguistically principled way through geometrically decaying group sparsity. Current sparse autoencoders struggle with merged hierarchical features due to uniform regularization encouraging absorption of broader features into more specific ones (e.g., "starts with S" being absorbed into "short"). Our architecture introduces hierarchical sparsity via $K=9$ dimension groups with exponential regularization decay ($\lambda_k = \lambda_{base} \times 0.5^k$), reducing absorption while maintaining state-of-the-art reconstruction fidelity, sparse probing scores, and decent $\ell_1$ loss. The geometric structure enables precise feature isolation, with negative inter-group correlations confirming hierarchical organization.
[ "Al Safety", "Trustworthy AI", "Hierarchical Representation Learning", "Mechanistic interpretability", "Sparse Autoencoder", "Feature Absorption", "Feature Splitting" ]
Accept
https://openreview.net/pdf?id=i31cKXiyim
https://openreview.net/forum?id=i31cKXiyim
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "m4bXYSMRKZ", "hEV6g83Dip", "RYDziVSWmD" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740217191225, 1739985531332, 1741155520226 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission132/Reviewer_cYeX" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission132/Reviewer_1iys" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Simple and effective technique at improving SAE interpretability with grouped latents and geometrically decaying regularization\", \"review\": \"This paper introduces a novel training methodology for Sparse Autoencoders (SAEs) designed to improve interpretability by reducing feature absorption. The approach draws inspiration from linguistics, specifically the observation that broader concepts tend to appear more frequently than specific ones. The authors propose a structured organization of the SAE's latent space, dividing the latent dimensions into a series of \\\"groups\\\" of decreasing size. Smaller groups are intended to capture broader, more frequently activated concepts, while larger groups represent more specific, less frequently activated features.\\n\\nAnother key contribution is a modified regularization scheme. The authors apply a geometrically increasing regularization penalty across the groups. Smaller, \\\"broader\\\" groups receive a smaller penalty, encouraging their activation, while larger, \\\"more specific\\\" groups receive a stronger penalty.\\n\\nThe authors evaluate their approach using a series of benchmarks. They report achieving state-of-the-art feature absorption scores, suggesting improved interpretability, and demonstrate that the general model performance remains comparable to standard SAE training. These results will certainly be useful for the mechanistic interpretability community but it would be preferred if more information is included and presentation is improved.\", \"strengths\": [\"Modifies the training of SAEs that enables low feature absorption scores leading to better interpretability while maintaining performance\", \"The modification is also quite simple and can easily be used in practice without needing specialized libraries.\", \"Good justification from the linguistics literature for the usage of exponential decay of activations in groups\"], \"weaknesses\": [\"Unclear why 5 groups were used for experiments. It would be beneficial to test for other values of K as well\", \"How was the correlation calculated? The regular pearson correlation is defined for two variables but instead a single correlation is given for all 5 groups.\", \"Unclear how activation rate is measured. In particular, how exactly are the broad and specific features defined? Some examples are provided in the text but this is presumably not the whole list.\", \"A table is provided for the ablation study but there is no description in the text that mentions what exactly is being tested. 
It seems like it is measuring the difference between using linear decay and exponential decay but we do not know how the linear decay factor for the groups.\", \"(minor) Paper mentions 3 limitations which lead to the sparsity absorption tradeoff but only 2 are provided (line 34).\", \"(minor) The Gemma model is cited as \\\"Team et al\\\" but should preferably be \\\"Gemma Team\\\"\", \"(minor) I'm assuming that the sigma in the results (lines 255, 257) refers to the standard deviation but it might be better to be more explicit\", \"(minor) Possible typo on line 113. It is mentioned that d_{k+1} / d_k = 0.75 but this should be 0.5 if the same dimensions are used as in line 42 where each group is half the size of the preceding group.\"], \"comments\": [\"I was initially very confused with the usage of the word \\\"groups\\\" and thought that the paper would be using group theory from mathematics. It might be beneficial to use a different term to describe the separation of latents in the SAE. Perhaps a figure would also be useful in explaining the architecture.\"], \"rating\": \"7\", \"confidence\": \"3\"}", "{\"title\": \"Good idea but too similar to Matryoshka SAEs; Reject\", \"review\": [\"This paper introduces EWG-SAEs, which are constructed to learn hierarchical features by introducing various levels of SAE size and sparsity penalties. This approach is intended to aid with feature absorption, where independent concepts are combined into a single feature to \\u201chack\\u201d the sparsity penalty of SAEs.\", \"The main idea of this paper seems to be identical to Matryoshka SAEs, published on LessWrong on December 13th [1] and 19th [2]. This paper does not acknowledge Matryoshka SAEs, and does not consider them as a parallel work to compare EWG-SAEs to.\", \"In general, I think this is a strong idea with good theoretical underpinnings and is well evaluated. However, there is no acknowledgement of Matryoshka SAEs and no comparison of methods, resulting in a lack of novelty. Thus, I vote to reject.\", \"Strengths\", \"Uses nice motivations from linguistics to motivate the use of heirarchial SAEs as an analogy to heirarchial concepts in natural language\", \"Constructs an intuitive SAE architecture to take advantage of this insight\", \"The feature absorption is clearly better with an EWG-SAE than other methods\", \"I like the use of feature independence preservation as a decoupled gradient update in Equation 3, it seemed effective\", \"Weaknesses\", \"Biggest weakness is the lack of novelty due to the similarity to Matryoshka SAEs. There should be a discussion of how EWG-SAEs measure up against Matryoshka SAEs and a comparison of their theoretical underpinnings. From my understanding, these methods are virtually identical.\", \"Table 1 contains almost all of the key results of the paper, yet is very difficult to read. Please bold the best value in each row so it is easy to assess how EWG-SAE's stackup from a glance. It is currently far too many unmarked numbers to be digestible.\", \"While the paper claims to use ideas from linguistics, the main idea seems to be that concepts are heiarchial. It would be good to justify the use of hyperparemeters, like the 0.5 exponential decay rate in width using linguistics ideas.\", \"There is a lack of hyperparameter evaluation. How does performance change as the number of groups d increases? 
How sensitive are results to the exponent used in the sparsity function or the width decay rate?\", \"Minor points\", \"I think section 2.3 should come before section 2.2 (motivation before analytics)\", \"Questions\", \"I'm confused by the Model Performance/Behavior preservation sections of Table 1. For example, what is CE Loss Score, CE Loss Score with SAE, and without SAE? I don't think it is described in the paper.\", \"In 2.2 says that d_k = 0.5 d_k-1. But later says that d_k+1/d_k = 0.75. Why the discrepancy?\", \"[1] https://www.alignmentforum.org/posts/zbebxYCqsryPALh8C/matryoshka-sparse-autoencoders\", \"[2] https://www.lesswrong.com/posts/rKM9b6B2LqwSB5ToN/learning-multi-level-features-with-matryoshka-saes\"], \"rating\": \"4\", \"confidence\": \"4\"}", "{\"decision\": \"Accept\", \"comment\": \"This paper introduces Exponentially-Weighted Group Sparse Autoencoders (EWG-SAEs) to improve feature sparsity and reduce feature absorption in SAEs. The approach applies hierarchical sparsity with exponentially increasing penalties, inspired by linguistic principles. While the idea is interesting and well-evaluated, Reviewer 2 (R2) points out that Matryoshka SAEs, a closely related prior work, is not cited or compared against. Comparing with this prior-art would be helpful.\", \"title\": \"Paper Decision\"}" ] }
hxygS0e3Kx
Building Bridges, Not Walls: Advancing Interpretability by Unifying Feature, Data, and Model Component Attribution
[ "Shichang Zhang", "Tessa Han", "Usha Bhalla", "Himabindu Lakkaraju" ]
The increasing complexity of AI systems has made understanding their behavior and building trust in them a critical challenge, especially for large language models. Numerous methods have been developed to attribute model behavior to three key aspects: input features, training data, and internal model components. However, these attribution methods are studied and applied rather independently, resulting in a fragmented landscape of approaches and terminology. We argues that feature, data, and component attribution methods share fundamental similarities, and bridging them can benefit interpretability research. We conduct a detailed analysis of successful methods of these three attribution aspects and present a unified view to demonstrate that they employ similar approaches: perturbations, gradients, and linear approximations. Our unified view enhances understanding of attribution methods and highlights new directions for interpretability and broader AI areas, including model editing, steering, and regulation.
[ "Interpretability", "Attribution", "Explainability" ]
Accept
https://openreview.net/pdf?id=hxygS0e3Kx
https://openreview.net/forum?id=hxygS0e3Kx
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "Y0dQcBhbD8", "K91Z6VCMzH", "5N5owjKqhN", "2cWzK4tJV5" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741103151717, 1740017683053, 1740241820069, 1740823665639 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission36/Reviewer_m3NS" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission36/Reviewer_FJDV" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission36/Reviewer_PYEy" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"This paper provides an informative survey of techniques for feature, data, and model component attributions, identifying strong similarities between approaches across the three domains. I believe this unification is a necessary and significant contribution to the field of interpretability.\", \"review\": [\"### Strengths\", \"I found the categorization of past works into method clusters, particularly the rows in Table 2, to be highly informative and illuminating. The interpretability literature is vast and complex, but this survey paper makes significant strides in organizing it.\", \"The discussion allocated to each cluster is excellent, providing enough insights into these works without overwhelming the reader. The appendix provides a more thorough assessment of each of these papers, ideal for a practitioner to quickly learn about the various methods.\", \"### Weaknesses\", \"I believe the unification work of (Han et al., 2022) is in a very similar spirit to this paper (unification of attribution methods). The treatment of this citation in 3.4 is well handled, but I believe this citation should come earlier, perhaps in the introduction.\", \"Although this survey is extensive, I would have liked to see a little bit more discussion on interaction attributions, with some important works in this field cited. (e.g. How does This Interaction Affect Me? Interpretable Attribution for Feature Interactions)\", \"### Suggestions\", \"I would attempt to move Table 2 to the main body of the paper. As a researcher in the field, I found this table to be particularly informative. If vertical spacing is an issue, perhaps the table could be reduced to three rows (Perturbation, Gradient, and Linear Models), where the Methods are indicated in line with the text (ex. Occlusions (Zeiler & Fergus, 2014) [Direct])\", \"Line 17 of the abstract: *argues* -> argue\"], \"rating\": \"8\", \"confidence\": \"4\"}", "{\"title\": \"Unifying Feature, Data, and Model Component Attribution\", \"review\": \"The authors present a unifying framework for understanding feature, data, and model component attribution via perturbations, gradients, and linear approximations. While the position presented in the paper is interesting, I think the paper's contribution may not offer significantly new insights relative to the existing literature. 
The methods discussed -- perturbations, gradients, and linear approximations -- are fundamentally highly-correlated concepts in mathematics, and it is not very clear to me how this framework enhances our understanding of attribution methods.\", \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"Unified view of feature, data, and component attribution methods\", \"review\": \"**Quality**\", \"pros\": [\"Identified potentially novel common concepts and challenges across types of attribution and proposes promising research directions\", \"Would help newcomers understand the the big picture\"], \"cons\": [\"Did not produce anything inherently novel as it is a survey paper\"], \"significance\": [\"Potential to advance interpretability research by synthesizing ideas and encouraging cross-aspect knowledge transfer\", \"Potentially benefit broader research in attribution due to its holistic perspective\", \"The techniques studied could soon be applicable in practice\", \"Unsure how much of this is novel/insightful\"], \"rating\": \"6\", \"confidence\": \"2\"}" ] }
hlj9zQiHFE
Opportunities and Challenges of Frontier Data Governance With Synthetic Data
[]
Synthetic data, or data generated by machine learning models, is increasingly emerging as a solution to the data access problem. However, its use introduces significant governance and accountability challenges, and potentially debases existing governance paradigms, such as compute and data governance. In this paper, we identify 3 key governance and accountability challenges that synthetic data poses: it can enable the increased emergence of malicious actors, spontaneous biases, and value drift. We thus craft 3 technical mechanisms to address these specific challenges, finding applications for synthetic data in adversarial training, bias mitigation, and value reinforcement. These could not only counteract the risks of synthetic data, but also serve as critical levers for governance of the frontier in the future.
[ "Synthetic Data", "Data", "AI Governance", "Accountability", "Trust", "Regulation", "Data Governance", "Bias", "Alignment" ]
Reject
https://openreview.net/pdf?id=hlj9zQiHFE
https://openreview.net/forum?id=hlj9zQiHFE
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "sqHRPha50V", "bsEGIUjFor", "RBHQETy030", "JNxMCUyS5H" ], "note_type": [ "official_review", "official_review", "official_review", "decision" ], "note_created": [ 1740734926688, 1740873559216, 1739906774844, 1741103729670 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission79/Reviewer_YBRs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission79/Reviewer_gfTU" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission79/Reviewer_Ya85" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"A valuable starting point for discussions on synthetic data governance but requires substantial revisions before publication\", \"review\": \"# Significance\\nThis paper explores the opportunities and challenges of frontier data governance using synthetic data. It identifies three key governance and accountability challenges: the increased emergence of malicious actors, spontaneous biases and value drift. These challenges are highly relevant to ongoing discussions in both technical and regulatory domains. \\n\\n# Overall Quality and Evaluation\\nThe paper is well-written and addresses an important topic. However, its motivation and methodology lack clarity. Specifically: \\n\\n- The methodology for selecting the three identified challenges is not well explained. \\n- While the paper claims to identify three opportunities, these are not clearly presented. \\n- The authors do not adequately connect the identified opportunities to existing governance mechanisms and frameworks. As a result, it remains unclear how these mechanisms can serve as \\\"critical levers for governance,\\\" as the authors suggest. \\n\\n# Suggestions for Improvement\\nThe governance of synthetic data intersects with multiple domains, including data, model, and compute governance. Attempting to cover such a broad scope within a four-page paper leads to gaps in the discussion. To improve clarity and depth, I suggest the following: \\n\\n- Focus on a specific sectoral use case, such as healthcare, where synthetic data governance is particularly critical. While Section 3.2 briefly discusses synthetic data in healthcare, it is the only concrete example in the paper. Expanding on this or choosing another well-defined use case would strengthen the analysis. \\n- Select and elaborate on specific governance characteristics (e.g., key actors such as data controllers, challenges like replicability, and governance mechanisms such as model security). A more structured approach would enhance the paper\\u2019s contribution.\", \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"Review of \\\"Opportunities and Challenges of Frontier Data Governance With Synthetic Data\\\"\", \"review\": \"The paper summarizes the challenges and opportunities associated with using\\u00a0synthetic training data\\u00a0for large machine learning models. It focuses on three main aspects: (a)\\u00a0Synthetic adversarial examples, which can both\\u00a0perturb\\u00a0and\\u00a0robustify\\u00a0a model.\\n(b)\\u00a0Unskewed synthetic data, which may help\\u00a0mitigate biases\\u00a0but can also introduce\\u00a0new and different biases. 
(c)\\u00a0Synthetic data that adheres to agreed-upon (but also potentially dangerous or undesirable) values.\\n\\n**Positive Aspects:**\\n- Provides an\\u00a0interesting overview\\u00a0of existing work on training with synthetic data and a\\u00a0collection of potential future developments.\\n- Clearly presents\\u00a0potential harms and mitigation strategies.\\n\\n**Negative Aspects:**\\n- As a\\u00a0short overview paper,\\u00a0the topics are\\u00a0not discussed in depth.\\n- In my view, the\\u00a0governance aspect remains somewhat ambiguous. The paper primarily presents\\u00a0possible good practices\\u00a0rather than\\u00a0politically actionable\\u00a0recommendations.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"The identified challenges are trivial\", \"review\": [\"The identified challenges are trivial and applicable to any training setting, not specific to synthetic data. Poisoning, lack of data diversity, and bias are well-known and well-studied issues.\", \"The authors argue that their advantage over prior work is technical solutions. However, the solutions are speculative, and there is no theoretical or empirical validation. The challenges introduced are too complicated to be addressed through speculation.\"], \"rating\": \"5\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}" ] }
h9zLRWS5uB
Unmasking Transformers: A Theoretical Approach to Data Recovery via Attention Weights
[]
In the realm of deep learning, transformers have emerged as a dominant architecture, particularly in both natural language processing and computer vision tasks. However, with their widespread adoption, concerns regarding the security and privacy of the data processed by these models have arisen. In this paper, we address a pivotal question: Can the data fed into transformers be recovered using their attention weights and outputs? We introduce a theoretical framework to tackle this problem. Specifically, we present an algorithm that aims to recover the input data $X \in \mathbb{R}^{d \times n}$ from given attention weights $W = QK^\top \in \mathbb{R}^{d \times d}$ and output $B \in \mathbb{R}^{n \times n}$ by minimizing the loss function $L(X)$. This loss function captures the discrepancy between the expected output and the actual output of the transformer. Our findings have significant implications for preventing privacy leakage from attacking open-sourced model weights, suggesting potential vulnerabilities in the model's design from a security and privacy perspective - you may need only a few steps of training to force LLMs to tell their secrets.
[ "Inversion Attack", "Data Privacy in LLMs", "Optimization" ]
Reject
https://openreview.net/pdf?id=h9zLRWS5uB
https://openreview.net/forum?id=h9zLRWS5uB
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "lABSGfbefE", "Yqa71Kf6OX", "Sg05VzVbUj", "0hYCnKEN24" ], "note_type": [ "official_review", "decision", "official_review", "official_review" ], "note_created": [ 1740998673212, 1741083004202, 1740771103714, 1740792389548 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission29/Reviewer_LxCb" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission29/Reviewer_yG8u" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission29/Reviewer_4WdU" ] ], "structured_content_str": [ "{\"title\": \"The paper presents a theoretically sound but impractically applicable model inversion attack for extracting training data from attention weights, lacking comparative evaluation against established baselines.\", \"review\": \"The authors introduce a model inversion technique that extracts confidential training data by analyzing attention weights alongside model outputs. Taking a theoretical perspective, this research demonstrates that adversaries can reconstruct private input data X through a process of loss function minimization when they have obtained the model's attention weights and output values.\", \"strength\": \"This research establishes a theoretical framework for reconstructing input data through analysis of attention weights and model outputs. The authors carefully examine the mathematical underpinnings of attention mechanisms to understand how sensitive information might be recovered from inputs, which helps people's understanding of the potential risk.\", \"weakness\": \"1. The experimental evaluation lacks comparison with established baselines from the membership inference attack domain, instead only reporting loss values and corresponding success rates. While many MIA benchmarks are known to suffer from distribution shift issues, including performance metrics on these benchmarks and comparisons with existing approaches would strengthen the paper's empirical validation.\\n2. The threat model assumes adversarial access to training data attention outputs but not the data itself, which appears to be an impractical assumption in real-world scenarios. This undermines the practical applicability of the proposed attack method.\", \"rating\": \"5\", \"confidence\": \"3\"}", "{\"decision\": \"Reject\", \"comment\": \"Reviewers have mentioned several concerns. In particular, the memorization one is significant.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review\", \"review\": \"The paper proposes an inversion attack to recover the input of a transformer model with given attention layers and outputs. The topic is of certain importance and interesting. However, the reviewer has some concerns.\\n\\n1. First of all, the authors should make it clear the model output $B$ is the logits of the transformer right after Softmax because the top-k selection process and de-tokenizing are non-differentiate. The gradient-wise inverse won't work if the outputs are token IDs or words if for LLMs (e.g. GPT2 in the paper). However, it is a scarce case that outputs will be available to an adversary.\\n2. Continuing on point one, it is quite confusing why the weights of attention layers are needed if the logits are available as the mathematic derivation should still stand even if starting from the final logits.\\n3. The reason why the weights of attention layers are available while the weights of other layers such as linear MLP layers are not available is missing.\\n4. 
One common case in which both outputs and weights are leaked to attackers is federated learning, which could be mentioned in the paper.\\n\\nDespite these concerns, the paper still elaborates on how gradients flow during the inversion attack (gradient matching) and draws attention to the need to keep the weights and logits of a transformer model private.\", \"rating\": \"6\", \"confidence\": \"5\"}", "{\"title\": \"Important problem, but the method and the paper have several issues\", \"review\": \"The paper frames the data recovery problem as an inverse regression on the transformer\\u2019s attention mechanism. It assumes the model\\u2019s weights and attention matrices for a given input are known and seeks to recover the input based on this information, by directly optimizing the input vectors.\", \"i_have_several_concerns_with_the_paper\": \"\", \"methodology_and_experimental_setup\": \"1. Small Fine-Tuning Dataset\\n\\nThe paper fine-tunes a GPT-2 model on only \\u201chundreds\\u201d of data points, which is a trivially small scale by GPT-2 standards. This severely limits the credibility of the experiments: with such a small dataset, the model might simply be memorizing most or all of the training samples. Consequently, it is unclear whether the observed data-recovery phenomenon is due to a genuine architectural vulnerability or just the result of a grossly overfitted, memorizing model. If the latter, then the practical implications for larger-scale language models\\u2014trained on millions or billions of tokens\\u2014remain unexamined.\\n\\n\\n2. Number of Tokens Recovered\\n\\nThe paper showcases only one example where two masked tokens are reconstructed. Although this example confirms the feasibility of recovering a short phrase, there is no systematic evaluation on how many tokens can be recovered in practice, or how model accuracy might degrade as the number of targeted tokens grows. More studies, possibly with an increasing number of masked tokens, would be needed to determine whether the method scales or quickly fails as the masked region lengthens.\\n\\n\\n3. Performance Post Fine-Tuning\\n\\nThere is no discussion of whether the model maintains or loses its capabilities on the original GPT-2 tasks after this specialized fine-tuning. If the model is merely overfitted to producing memorized text strings, that might degrade other performance metrics. Without assessing generalization or broader task performance, we cannot be sure whether this \\u201cattack\\u201d is robust or simply overwriting the model with new data in a way that undermines its overall usefulness.\\n\\n\\n4. Access to Non-Masked Training Data\\n\\nThe approach assumes that the attacker has access to the unmasked portion of the data, plus the model\\u2019s attention mechanisms. This is a fairly strong assumption - the attack is essentially a white-box attack. Many real-world settings may not expose partial ground truth or the full set of attention weights. Hence, the practicality of the threat model is doubtful. Under less privileged conditions, the method may be far less effective. Realistically, an attacker might not know partial ground-truth tokens. This assumption can inflate attack success rates.\\n\\n\\n5. Mapping Vectors to Tokens\\n\\nThe authors do not explain how the final optimized continuous vectors are mapped back to discrete tokens\\u2014whether they use a nearest-neighbor approach in embedding space, an argmax over vocabulary logits, or some other heuristic. 
Omitting these details leaves a gap: the success of data recovery might hinge on how effectively the recovered vectors are snapped to valid tokens.\\n\\n$$----------------$$\\n\\nTheoretical Oversights - Ignoring Residual and Feed-Forward Layers\\n\\nThe Hessian analysis focuses exclusively on the self-attention sub-layer and treats the architecture as though there are no additional non-linearities (i.e., feed-forward components), normalization layers, or skip connections. Real GPT-like architectures heavily rely on these other components, which introduce significant complexity, especially in the model\\u2019s gradient and Hessian structure. Omitting them may invalidate or oversimplify any claims about invertibility in actual GPT architectures.\\n\\n$$----------------$$\\n\\nRelated Works\\n\\n1. Connection to Memorization\\n\\nLarge language models have been known to memorize parts of their training data, especially if the data is unique or repeated. This paper\\u2019s data recovery approach essentially exploits that memorization phenomenon, yet the authors do not mention or position their work in the context of \\u201cLLM memorization.\\u201d Understanding how memorization emerges\\u2014and how it might be mitigated\\u2014would be a vital part of any serious discussion about data leakage. Failure to cite or discuss memorization literature is a significant gap.\\n\\n2. Similarities to Existing Recovery Techniques\\n\\nRecovering training data from model parameters, gradients, or partial outputs has been extensively studied, especially in computer vision. Leveraging attention weights is, in principle, not too different from leveraging CNN filters or other forms of latent representations. The paper does not sufficiently acknowledge these works, leaving its \\u201cnovel\\u201d position questionable.\\n\\n$$--------------------$$\\n\\nScalability and Real-World Feasibility\\n\\nBecause the experiments are conducted only on a small set of data points, it is unclear whether this method would work at scale for real GPT-2 style models trained on massive corpora. If the method solely exploits memorization from a tiny fine-tuned dataset, it may not extend to scenarios where the model\\u2019s parameters are shaped by vast and diverse training sets.\\n\\n\\nGiven these severe concerns, I cannot recommend accepting the paper in its current form.\", \"rating\": \"3\", \"confidence\": \"4\"}" ] }
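To make the inversion objective described in the Unmasking Transformers abstract above concrete, the sketch below optimizes a candidate input $X$ so that the attention output it induces matches the observed $B$, given fixed weights $W = QK^\top$. The squared Frobenius loss, the tensor shapes, and the optimizer settings are illustrative assumptions; the paper's exact loss $L(X)$ and procedure are not reproduced here.

```python
# Sketch: recover input X from attention weights W = Q K^T and attention output B
# by gradient descent on L(X) = || softmax(X^T W X) - B ||_F^2.
import torch

d, n = 16, 8
torch.manual_seed(0)

W = torch.randn(d, d)                      # "known" attention weights Q K^T
X_true = torch.randn(d, n)                 # hidden ground-truth input
with torch.no_grad():
    B = torch.softmax(X_true.T @ W @ X_true, dim=-1)   # observed attention output

X = torch.randn(d, n, requires_grad=True)  # attacker's estimate of the input
opt = torch.optim.Adam([X], lr=1e-2)

for step in range(5000):
    opt.zero_grad()
    pred = torch.softmax(X.T @ W @ X, dim=-1)
    loss = (pred - B).pow(2).sum()
    loss.backward()
    opt.step()

print(f"final loss: {loss.item():.3e}")    # small loss => attention output reproduced
```

Note that driving this loss to zero only guarantees that the attention output is reproduced; it does not by itself guarantee that the recovered $X$ equals the true input, which is one reason the reviewers question how the optimized vectors are mapped back to tokens and how practical the threat model is.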
h4uj9WcdLs
Mitigating Unintended Memorization with LoRA in Federated Learning for LLMs
[]
Federated learning (FL) is a popular paradigm for collaborative training which avoids direct data exposure between clients. However, data privacy issues still remain: FL-trained large language models are capable of memorizing and completing phrases and sentences contained in training data when given their prefixes. Thus, it is possible for adversarial and honest-but-curious clients to recover training data of other participants simply through targeted prompting. In this work, we demonstrate that a popular and simple fine-tuning strategy, low-rank adaptation (LoRA), reduces memorization during FL by up to a factor of 10. We study this effect by performing a medical question-answering fine-tuning task and injecting multiple replicas of out-of-distribution sensitive sequences drawn from an external clinical dataset. We observe a reduction in memorization for a wide variety of Llama 2 and 3 models, and find that LoRA can reduce memorization in centralized learning as well. Furthermore, we show that LoRA can be combined with other privacy-preserving techniques such as gradient clipping and Gaussian noising, secure aggregation, and Goldfish loss to further improve record-level privacy while maintaining performance.
[ "Machine Learning", "Federated Learning", "Large Language Models", "Privacy", "Memorization" ]
Reject
https://openreview.net/pdf?id=h4uj9WcdLs
https://openreview.net/forum?id=h4uj9WcdLs
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "wWloPc0X1O", "fuezSNPqEj", "a0lgrmQ4eP", "78GgOKq1TX" ], "note_type": [ "official_review", "official_review", "decision", "official_review" ], "note_created": [ 1740036474288, 1740885356361, 1741078480080, 1740488918658 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission34/Reviewer_j6op" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission34/Reviewer_JFsy" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission34/Reviewer_8b4M" ] ], "structured_content_str": [ "{\"title\": \"Technical Limitation exists\", \"review\": \"Experiments primarily focus on Llama 2 and Llama 3 models, lacking validation on other mainstream architectures (e.g., GPT, BERT, PaLM). This raises concerns about whether the findings generalize across diverse model families.\\n\\nThe use of medical records as synthetic \\u201ccanaries\\u201d aligns with the medical application scenario but fails to test memorization of other sensitive data types (e.g., financial records, personal identifiers). This limits the broader applicability of the conclusions.\\n\\nThe privacy-preserving effect of LoRA is not rigorously quantified. Metrics like differential privacy (DP) parameters (\\u03b5, \\u03b4) or empirical privacy budgets are absent, leaving the actual privacy guarantees unclear.\\n\\nWhile the paper observes that LoRA reduces memorization, it does not deeply investigate why this occurs. Simply attributing it to \\u201creduced trainable parameters\\u201d is insufficient. Critical unanswered questions include:\\n\\nDoes LoRA\\u2019s low-rank update structure inherently limit overfitting to specific training samples?\\nHow do low-rank adaptations alter the model\\u2019s representation space to suppress memorization?\\nIs memorization reduction tied to the rank of LoRA\\u2019s update matrices?\", \"rating\": \"4\", \"confidence\": \"3\"}", "{\"title\": \"Paper Reivew\", \"review\": \"> Summary:\\n\\nThis paper investigates the unintended memorization problem in FL-trained large language models settings and demonstrates that LoRA can be potentially an effective approach in reducing memorization.\\n\\n> Pros:\\n\\n1.The study explores the role of LoRA in mitigating unintended memorization in LLMs, which is an interesting and timely research topic with significant implications for privacy in federated learning and beyond.\\n\\n2.The paper is well-structured and presents its ideas with clarity, making it easy to follow.\\n\\n> Cons:\\n\\n1.One major weakness of the paper is that while it demonstrates LoRA's potential effectiveness in reducing memorization in both CL and FL settings, it does not delve into the underlying reasons or provide potential explanations for this effect, which limits the depth of the study. \\n\\n2.In Figure 8, the memorization behaviors across different models vary under the same training conditions (full fine-tuning / LoRA training) in both CL and FL settings. However, the authors do not offer an in-depth explanation of these results.\", \"rating\": \"5\", \"confidence\": \"4\"}", "{\"decision\": \"Reject\", \"comment\": \"This paper explores unintended memorization in FL-trained LLMs and finds that LoRA reduces memorization with minimal utility loss, particularly in the medical domain. 
However, the framing needs refinement, as it lacks broader validation across models and datasets, does not rigorously quantify privacy gains, and omits connections to prior work demonstrating similar findings.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Interesting core finding with limitations in framing.\", \"review\": \"I find the core finding of <<LoRA memorizing less>> interesting, and the results on the medical domain showing that this \\\"gain in privacy\\\" comes at virtually no cost in utility are promising. However, I find the current framing of the paper a bit problematic/too constraining:\\n\\n- First, there is no mention of Biderman et al. (2024), even though it is a prominent paper showing a very related result on LoRA. This paper can then be viewed as basically an extension to that, explicitly investigating memorization. As Biderman et al. (2024) already showed that LoRA \\\"learns less\\\", framing the memorization results correctly w.r.t. their findings is also crucial for assessing the novelty of the findings of this paper.\\n- This leads on to my second concern about the framing; while I understand that the medical domain makes memorization concerns immediately clear, the experiments could use datasets from other domains as well, and then the framing of the paper could be widened to reach a broader audience.\\n- Similarly, I do not quite understand the need to commit to Federated Learning in the title and in the intro so early, especially since half of the main experiments revolve around the centralized setting.\\n- Finally, I find the message that is at some points suggested, i.e., that one should use LoRA for the benefit of less memorization, a bit problematic. While in the experiments of the authors there is no significant utility loss, this is not a general truth, as also shown in Biderman et al. (2024). Therefore, the message has to be more nuanced here.\\n\\nIn general, I think the paper could be framed more generally as \\\"LoRA memorizes less\\\", leading to potentially a much larger reach and impact, and would benefit from a clear connection to Biderman et al. (2024).\\n\\n**References**\\n\\nD Biderman et al., LoRA Learns Less and Forgets Less. TMLR 2024.\", \"rating\": \"5\", \"confidence\": \"4\"}" ] }
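The memorization probe described in the abstract above (models completing training-data phrases when given their prefixes) can be illustrated with a simple canary check. The model name, the canary string, and the prefix/suffix split below are placeholders, not the paper's actual injected clinical sequences or evaluation pipeline.

```python
# Sketch: probe for unintended memorization by prompting a fine-tuned model with a
# canary prefix and checking whether it reproduces the injected suffix verbatim.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"          # placeholder for a fine-tuned checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

canaries = [
    "Patient 0421 was prescribed 40mg of drug X for chronic",   # hypothetical canary
]

def is_memorized(canary: str, prefix_frac: float = 0.5) -> bool:
    words = canary.split()
    cut = max(1, int(len(words) * prefix_frac))
    prefix, suffix = " ".join(words[:cut]), " ".join(words[cut:])
    ids = tok(prefix, return_tensors="pt").input_ids
    out = model.generate(ids, max_new_tokens=32, do_sample=False)  # greedy continuation
    completion = tok.decode(out[0][ids.shape[1]:], skip_special_tokens=True)
    return suffix in completion                   # exact-continuation criterion

print([is_memorized(c) for c in canaries])
```

Running the same check on full-fine-tuned and LoRA-fine-tuned checkpoints is the kind of comparison the paper reports; how the reduction relates to LoRA's low-rank structure is exactly what the reviewers ask to be explained.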
fSAIDcPduZ
VideoJail: Exploiting Video-Modality Vulnerabilities for Jailbreak Attacks on Multimodal Large Language Models
[ "Wenbo Hu", "Shishen Gu", "Youze Wang", "Richang Hong" ]
With the rapid development of multimodal large language models (MLLMs), an increasing number of models focus on video understanding capabilities, while overlooking the security implications of the video modality. Previous studies have highlighted the vulnerability of MLLMs to jailbreak attacks in the image modality. This paper explores the impact of the video modality on the secure alignment of MLLMs. We conduct a systematic empirical analysis of the harmlessness performance of representative MLLMs, revealing vulnerabilities introduced by video input. Motivated by these findings, we propose a novel jailbreak method, VideoJail, which leverages video generation models to amplify harmful content in images. By using carefully crafted text prompts, VideoJail directs the model's attention to malicious queries embedded within the video, successfully breaking through existing defense mechanisms. Experimental results show that VideoJail is highly effective in jailbreaking even the most advanced open-source MLLMs, achieving an average attack success rate (ASR) of 96.53\% for LLaVA-Video and 96.00\% for Qwen2-VL. For closed-source MLLMs with harmful visual content detection capabilities, we take advantage of the dynamic characteristics of the video modality, using a jigsaw-based approach to cleverly bypass their secure alignment mechanisms, achieving an average attack success rate of $92.13\%$ for Gemini-1.5-flash.
[ "Jailbreak Attack", "Video LLMs" ]
Accept
https://openreview.net/pdf?id=fSAIDcPduZ
https://openreview.net/forum?id=fSAIDcPduZ
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "UaG8mrA9Ji", "RjQ0udFXji", "60Irc5A9II", "1Zf95Q2Yah" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741109463794, 1741064192000, 1739685953709, 1741053742945 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission54/Reviewer_6QMf" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission54/Reviewer_rdPb" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission54/Reviewer_A961" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"comment\": \"The reviewers agree that the topic is important and underexplored, making this paper a strong fit for the workshop. The paper is praised for its comprehensive empirical evaluation, clear presentation, and novel attack techniques. Areas for improvement include a deeper analysis of why models are vulnerable to video-based attacks and a discussion on potential defenses to make the work more constructive. Overall, the paper addresses a hard and crucial problem in MLLM security, making it a valuable contribution to AI safety research.\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of \\\"VideoJail: Exploiting Video-Modality Vulnerabilities for Jailbreak Attacks on Multimodal Large Language Models\\\"\", \"review\": \"### **Paper summary**\\n\\nThis paper focuses on the role of the video modality in multimodal large language models (MLLMs) with respect to adversarial attacks, specifically jailbreaks. The authors\\u2019 contributions are twofold: (1) they conduct a systematic analysis of the video modality\\u2019s influence in the harmlessness performance of MLLMs, and (2) propose a novel jailbreak method for MLLMs, VideoJail, that uses a combination of typographic image creation, video generation, a tailored adversarial text prompt, and for closed-source MLLMs, an additional image processing step to bypass built-in alignment mechanisms. Their analysis demonstrates unique and significant vulnerabilities in the video modality, and the strong ability for VideoJail to exploit these vulnerabilities (>90% ASR on both open-source and proprietary MLLMs).\\n\\n### **Reasons to accept**\\n\\n1.\\tThis paper demonstrates novelty in exploration of jailbreak vulnerabilities and crafting jailbreak methods that are specific to the video modality (and non-trivial when extending from the image modality).\\n2.\\tThe empirical study of the role of the video modality in MLLM jailbreak attacks is insightful, and thorough in its exploration of model architecture, video framerate, model size, and comparison to the image modality.\\n3.\\tThe evaluation of VideoJail, encompassing various model architectures, safety dimensions, baselines, and MLLM types (open vs closed source) is comprehensive and sufficient.\\n\\n### **Reasons to reject**\\n\\n(none)\\n\\n### **Comments for authors**\\n\\n1.\\tOverall, this is a very strong, well motivated, and well organized paper. The remainder of my comments consist of requests for clarifications or suggestions for readability.\\n2.\\tFor the calculation of ASR, would it be possible to clarify the substring lookup and GPT-3.5 evaluation are aggregated / used together to compute the final ASR?\\n3.\\tWhy was 2 seconds chosen for the video length? Would the video length matter in the same way that the framerate (Figure 4b) does for varying the ASR?\\n4.\\tFor VideoJail-Pro, are the four frames (line 411) the only four frames in the video? 
How does this relate to / be consistent with the 2 second video length and varying frames per second described in Section 2?\\n5.\\tFor VideoJail-Pro, is it possible to present baseline ASRs for the closed-source MLLMs (as in Table 1)? This would illustrate the change in ASR in addition to the results in Figure 8b.\\n6.\\tTypos on the following lines: 38, 235, 250\\n7.\\tT_fig (first used in Equation 1, line 255) is not defined earlier in the paper.\\n8.\\tT_vidjail (introduced in lines 307-309) should be defined in the text (rather than Figure 5) to enhance readability.\\n9.\\tThe term Q is defined on 3 different equations; I suggest making them distinct terms to avoid potential confusion in notation.\\n10.\\tI suggest changing \\u201cfour experimental setups\\u201d (line 324) to \\u201cfive experimental setups\\u201d to be consistent with the number of rows per model in Table 1.\", \"rating\": \"8\", \"confidence\": \"4\"}", "{\"title\": \"Review of the paper\", \"review\": \"## Summary\\nThis paper investigates the security risks associated with the use of video inputs in multimodal large language models (MLLMs), an area that has not been thoroughly explored in prior research. Through detailed empirical analysis, the paper highlights how the inclusion of video data creates specific vulnerabilities that can be exploited in jailbreak attacks. The authors then introduce VideoJail, an innovative attack method that utilizes video generation models to enhance the harmful effects of static images, guiding the model's focus toward malicious queries embedded within the video content. To address the defenses in closed-source MLLMs, which are designed to detect harmful content, the paper also presents an enhanced version, VideoJail-Pro. This advanced approach takes advantage of the temporal and dynamic aspects of video, employing a jigsaw-like frame rearrangement technique to successfully bypass safety mechanisms and improve attack efficacy.\\n\\n## Strengths\\n**1. The topic is crucial and unexplored:** With the rapid development of MLLMs, safety issues have become increasingly important. The paper analyzes the impact of different types of inputs on the toxicity of MLLM outputs, providing a comprehensive viewpoint in this field.\\n\\n**2. The attack method is simple yet effective:** This paper presents two jailbreak attack methods: VideoJail and VideoJail-Pro. Both novel methods are not complicated but have proven to be quite effective through experiments.\\n\\n**3. The experiments are comprehensive and persuasive:** The paper provides numerous experiments on different types of state-of-the-art (SOTA) MLLMs, including both open-source and closed-source models, making the results more reliable and persuasive.\\n\\n## Weaknesses\\n**1. Deeper analysis of experimental results:** Although the article mentions the attack success rate (ASR), a deeper discussion of why these results occurred would be helpful. More details on the models\\u2019 responses and why certain models are more vulnerable to video inputs, or why increasing model parameters doesn\\u2019t always improve defenses, could add more value.\\n\\n**2. Discussion of security measures:** While VideoJail-Pro's attack method is discussed, it would be beneficial to further explore how to strengthen defenses against such attacks. 
This would not only focus on the attack itself but also provide suggestions for future improvements, making the article more constructive.\", \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Strong video jailbreaking results and analysis on multimodal language models\", \"review\": [\"### Summary\", \"This work conducts an empirical analysis on the vulnerabilities of multimodal LLMs in the context of video, and demonstrates the vulnerabilities introduced with the inclusion of this modality. They propose *VideoJail* (and an extension to it) which, in a black-box threat model, can successfully jailbreak most open-sourced MLLMs, as well as closed-sourced (to a somewhat lesser extent).\", \"### Strengths\", \"Strong results between VideoJail and VideoJailPro across all models tested\", \"The approach of tiling up the inputs in the \\u201cjigsaw game\\u201d for VideoJailPro is interesting\", \"Comprehensive evaluation of a variety of MLLMs w.r.t. their vulnerability to video-based jailbreak attacks, across a variety of jailbreak input formats\", \"Well written, clear, and easy to follow\", \"### Weaknesses\", \"The approach seems very sensitive to the formatting of the text prompts, but it does seem to consistently work across a variety of MLLMs; this is an interesting finding but perhaps could be a brittle point of the attack algorithm\", \"### Questions/Comments\", \"Most models (other than Qwen2-VL-72B) seem to have comparable ASRs between $I_{typ} + T_{vidjail}$ and $V_{typ} + T_{vidjail}$; if I understood correctly, $I_{typ}$ was a static input image which embeds the harmful text; as I\\u2019m not as familiar with safety research in the context of vision/language models, are non-video VLLMs similarly vulnerable to comparably simple attacks? Or does the inclusion of video as an input modality open up the attack surface in a way that makes the model more brittle to these sorts of attacks? Is this something we could expect to improve with better (safety) training, potentially as we get more \\u201ccommunity sourced red-teaming\\u201d efforts?\"], \"rating\": \"7\", \"confidence\": \"3\"}" ] }
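For readers unfamiliar with the "jigsaw" idea the reviewers highlight, the sketch below shows one plausible construction: split a typographic image of the query into tiles and show only one tile per video frame, so that no single frame contains the full text. The tiling layout, frame count, and file names are assumptions; the paper's exact VideoJail-Pro recipe may differ.

```python
# Sketch: a jigsaw-style frame construction in the spirit of VideoJail-Pro.
from PIL import Image

def jigsaw_frames(image_path: str, rows: int = 2, cols: int = 2):
    img = Image.open(image_path)
    w, h = img.size
    tw, th = w // cols, h // rows
    frames = []
    for r in range(rows):
        for c in range(cols):
            tile = img.crop((c * tw, r * th, (c + 1) * tw, (r + 1) * th))
            frame = Image.new("RGB", (w, h), "white")   # blank canvas per frame
            frame.paste(tile, (c * tw, r * th))         # only one tile visible per frame
            frames.append(frame)
    return frames

for i, f in enumerate(jigsaw_frames("typographic_query.png")):   # hypothetical input image
    f.save(f"frame_{i}.png")   # the frames would then be assembled into a short video
```

The design intuition, as described in the reviews, is that per-frame content filters see only fragments, while the MLLM integrates the frames temporally and recovers the full query.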
fMFwDJgoOB
Truthfulness in LLMs: A Layer-wise Comparative Analysis of Representation Engineering and Contrast-Consistent Search
[]
The rapid advancement of Large Language Models (LLMs) has intensified the need for greater transparency in their internal representations. This study presents a layer-wise analysis of truthfulness storage in LLMs, comparing two state-of-the-art knowledge probing methodologies: Representation Engineering (RepE) and Contrast-Consistent Search (CCS). Our goal is to isolate truthfulness, defined as the factual accuracy of LLM outputs, from general knowledge encoded across model layers and to examine where and how this information is stored. RepE applies low-rank transformations within the model’s internal vector space, while CCS leverages pre-trained fixed vectors with an additional transformation layer to define truthfulness. Through experiments on Google’s Gemma models, evaluated across five diverse datasets, we find that truthfulness is embedded within pre-trained LLMs and can be amplified by specific input words. Our analysis reveals general trends in truthfulness storage and transferability, with CCS demonstrating greater stability in assessing truthfulness, while RepE exhibits potential in deeper layers but requires further refinement. Surprisingly, the truthfulness differences in the final layer, often considered the most critical, were statistically insignificant. This study provides empirical insights into the internal encoding of truthfulness in LLMs, highlighting the strengths and limitations of representation based transparency methods.
[ "Truthfulness", "Large Language Models", "LLM Transparency", "Representation Engineering", "Contrast-Consistent Search", "Interpretability" ]
Reject
https://openreview.net/pdf?id=fMFwDJgoOB
https://openreview.net/forum?id=fMFwDJgoOB
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "sbZdzulF5R", "qwSGhJYtL3", "Z2k89uPpKD" ], "note_type": [ "official_review", "official_review", "decision" ], "note_created": [ 1740957404049, 1740176715137, 1741103546284 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission73/Reviewer_ZRrb" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission73/Reviewer_Povu" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ] ], "structured_content_str": [ "{\"title\": \"Review\", \"review\": \"This paper applies existing knowledge probing methods, RepE and CCS, to analyze truthfulness representations across layers in Gemma models using five datasets. The experiments examine token position effects, layer-wise truthfulness encoding, and cross-dataset transferability.\", \"strengths\": [\"It provides a systematic comparison between the two methods.\", \"It offers useful insights into how truthfulness representations develop across model layers.\"], \"weaknesses\": [\"The paper lacks novelty as it primarily applies existing techniques rather than introducing new methods.\", \"The authors do not adequately position their findings within the broader literature on LLM truthfulness.\", \"The methodology would be stronger with baseline comparisons.\", \"This work resembles a course project rather than advancing the research frontier. Additional experiments, novel methods, and clearer implications for LLM development would enhance its contribution to the field.\"], \"rating\": \"4\", \"confidence\": \"3\"}", "{\"title\": \"Layer-wise analysis of Truthfulness directions in LLMs\", \"review\": \"The authors perform layer-wise comparative analysis of truthfulness directions in LLMs using two probing methodologies: RepE and CCS. While the results are interesting, the paper does not address important prior works that have identified and analyzed the truthfulness direction of LLMs across different layers:\\n\\n(1) Marks, S., & Tegmark, M. (2023). The geometry of truth: Emergent linear structure in large language model representations of true/false datasets.\\n\\n(2) Liu, Junteng, et al. (2024). \\\"On the universal truthfulness hyperplane inside llms.\\\"\\n\\nWhile comparing different probing methods to find the truthfulness direction is interesting, I believe that the presented work overlaps significantly with existing literature.\", \"rating\": \"4\", \"confidence\": \"5\"}", "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}" ] }
fB9zOpy98o
BaxBench: Can LLMs Generate Correct and Secure Backends?
[ "Mark Vero", "Niels Mündler", "Victor Chibotaru", "Veselin Raychev", "Maximilian Baader", "Nikola Jovanović", "Jingxuan He", "Martin Vechev" ]
The automatic generation of programs has long been a fundamental challenge in computer science. Recent benchmarks have shown that large language models (LLMs) can effectively generate code at the function level, make code edits, and solve algorithmic coding tasks. However, to achieve full automation, LLMs should be able to generate production-quality, self-contained application modules. To evaluate the capabilities of LLMs in solving this challenge, we introduce BaxBench, a novel evaluation benchmark consisting of 392 tasks for the generation of backend applications. We focus on backends for three critical reasons: (i) they are practically relevant, building the core components of most modern web and cloud software, (ii) they are difficult to get right, requiring multiple functions and files to achieve the desired functionality, and (iii) they are security-critical, as they are exposed to untrusted third-parties, making secure solutions that prevent deployment-time attacks an imperative. BaxBench validates the functionality of the generated applications with comprehensive test cases, and assesses their security exposure by executing end-to-end exploits. Our experiments reveal key limitations of current LLMs in both functionality and security: (i) even the best model, OpenAI’s o1, achieves a mere 60% on code correctness; (ii) on average, we could successfully execute security exploits on more than half of the correct programs generated by LLMs; and (iii) in less popular backend frameworks, models further struggle to generate correct and secure applications. Progress on BaxBench signifies important steps toward autonomous and secure software development with LLMs.
[ "large language model", "large language models", "LLM", "code generation", "code security", "security", "benchmark" ]
Accept
https://openreview.net/pdf?id=fB9zOpy98o
https://openreview.net/forum?id=fB9zOpy98o
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "yx6YGvRAhG", "iaGsLuJ4qL", "b6uaJkHjZI" ], "note_type": [ "official_review", "decision", "official_review" ], "note_created": [ 1740867322739, 1741100046993, 1739928221666 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission81/Reviewer_T8y8" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission81/Reviewer_apoQ" ] ], "structured_content_str": [ "{\"title\": \"Evaluating LLMs for Secure Backend Code Generation\", \"review\": \"Summary\\n\\nThe paper presents a benchmark for evaluating LLMs' capability in application backend code generation. It includes 28 scenarios, each describing an application requirement, API specification, environment-specific instructions, and database requirements. The generated code is evaluated through functional tests and security assessments.\\n\\nFor security evaluations, they manually craft common exploit code targeting specific CWEs for each scenario. Their results show that current state-of-the-art (SOTA) models still struggle with correctness, achieving only around 60% accuracy. Furthermore, nearly half of the code that passes functional tests contains vulnerabilities. They also demonstrate that prompting with security hints can help reduce vulnerabilities but does not fully mitigate the issue.\\n\\nStrength\\n\\nA very useful benchmark for evaluating the LLM's capability on backend code across 14 very popular backend framework. Security is a big issue as more practitioner using LLM to quickly generate backend code from scratch. This work can raise awareness on the security vulnerability issue when developers use LLM tools without aware of the security risk.\\n\\nWeakness\\n\\nit would be interesting to also test LLM agent's performance on the benchmark as many practioners' nowadays are using agent to develop.\", \"rating\": \"6\", \"confidence\": \"3\"}", "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review\", \"review\": \"This paper introduces BAXBENCH, a novel benchmark designed to evaluate the capabilities of LLMs in generating accurate and secure backend applications. BAXBENCH combines 28 scenarios and 14 frameworks and evaluates 10 state - of - the - art LLMs using functional tests and security exploits. The findings of this paper offer a valuable benchmark and insightful experimental results.\\n\\n**Questions:**\\n\\n1. Given that the paper's experiments used general models, would there be performance discrepancies for code-domain fine-tuned models when subjected to the BAXBENCH benchmark proposed herein\\n2. Can the single - run experiment results precisely represent the models' performance stability?\\n3. Are there any schemes or approaches to dynamically update the security exploit code library to keep pace with the constantly changing security environment?\", \"rating\": \"7\", \"confidence\": \"4\"}" ] }
ePE9BMoh8L
A Benchmark for Scalable Oversight Mechanisms
[]
As AI agents surpass human capabilities, scalable oversight -- the problem of effectively supplying human feedback to potentially superhuman AI models -- becomes increasingly critical to ensure alignment. While numerous scalable oversight mechanisms have been proposed, there is no systematic empirical framework to evaluate and compare them. While recent works have tried to empirically study scalable oversight mechanisms -- particularly Debate -- we argue that they contain methodological flaws that limit their usefulness to AI alignment. We introduce the scalable oversight benchmark, a principled framework for evaluating human feedback mechanisms based on our agent score difference (ASD) metric, a measure of how effectively a mechanism advantages truth-telling over deception. We supply a Python package to facilitate rapid and competitive evaluation of scalable oversight protocols on our benchmark, and conduct a demonstrative experiment benchmarking Debate.
[ "scalable oversight", "debate. alignment" ]
Reject
https://openreview.net/pdf?id=ePE9BMoh8L
https://openreview.net/forum?id=ePE9BMoh8L
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "lnA51Y0M1S", "XGuVgobshs", "UQ2Zl8Bic6" ], "note_type": [ "decision", "official_review", "official_review" ], "note_created": [ 1741079314542, 1740864773572, 1740938683394 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission110/Reviewer_G33D" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission110/Reviewer_eVhR" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"comment\": \"The paper lacks theoretical formalization to support its proposed Agent Score Difference (ASD) metric, making its arguments appear ad-hoc and intuitive rather than rigorously justified. Additionally, it fails to present concrete experiments or implementation details beyond a superficial class interface, making it difficult to evaluate its claims, especially given that it purports to introduce a new benchmark without empirical validation.\", \"title\": \"Paper Decision\"}", "{\"title\": \"This paper presents a benchmark for evaluating scalable oversight mechanisms, introducing the Agent Score Difference (ASD) metric as a more robust alternative to judge accuracy. The authors provide a Python library (SOlib) for systematic experimentation and demonstrate its use in evaluating the Debate protocol.\", \"review\": [\"## Strengths\", \"The paper introduces the Agent Score Difference (ASD) metric, which directly measures how much a scalable oversight mechanism favors truthfulness over deception. ASD is calculated with the given formula: $ASD = \\\\log p_{\\\\top} - \\\\log p_{\\\\bot}$. This is a significant improvement over previous methodologies that relied solely on judge accuracy.\", \"The inclusion of the SOlib Python package enables researchers to systematically evaluate scalable oversight protocols, lowering the barrier for further experimentation in AI alignment.\", \"The paper models real-world scenarios where advanced AI systems may access external tools to improve their responses by including AI tool use in the evaluation framework.\", \"## Weaknesses\", \"The Related Works section could be more descriptive of other proposed mechanisms for scalable oversight. Currently, it is vague and only glosses over prior research. Since the paper frequently critiques discrepancies in other methodologies, a more thorough discussion of alternative approaches would strengthen its comparative analysis.\", \"The paper lacks a theoretical proof that high ASD values correlate with robust long-term AI alignment, particularly in adversarial or deceptive AI settings.\", \"The benchmark is tested only in controlled AI simulation environments and does not incorporate human-AI interaction studies\", \"The paper critiques prior work for using Consultancy as a weak baseline, yet it still compares its ASD metric to judge accuracy in certain cases, which may introduce similar issues in evaluating effectiveness.\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"Incomplete Work and Unclear Contributions\", \"review\": \"This paper provides a framework to compare scalable oversight mechanisms. This is an important problem as we observe the increasing capabilities of large language models from tool use and increased scaling. First, the paper introduces an \\\"agent score difference\\\" metric for evaluating scalable oversight mechanisms. They argue that existing baselines of random consultancy are conceptually flawed and provide arguments in the case where the judge always believes the agent. 
Moreover, they argue for a more appropriate mechanism in measuring how much the agent is incentivized to provide the true answer. Their proposed arguments could be written with additional clarity and potentially would benefit from theoretical formalization. As it stands, the arguments appear ad-hoc and intuitive, and the authors do little to make any formal claims that their claimed metric would lead to superior alignment. They then introduce a library to enable the testing of their method. However, though they claim to propose an additional benchmark, their implementation details are minimal and superficial -- describing only a class interface without situating it in any real scalable oversight task. Moreover, though they claim this to be a benchmark, there are no concrete experiments conducted in the work and they provide no justifications for their claims besides intuitive argument. They speculatively promise experiments in a future or camera ready version of the paper but it is not possible to evaluate this paper without any experimental results. Besides, the manuscript has incomplete sentences and a minimal appendix. For this reason, I believe it would be most appropriate for this paper to be submitted at a time where the authors can provide more extensive and concrete results -- especially given that this paper purports to introduce a baseline. It is not currently in the shape to be evaluated under peer review.\", \"rating\": \"3\", \"confidence\": \"4\"}" ] }
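The agent score difference metric quoted in the reviews, $ASD = \log p_{\top} - \log p_{\bot}$, can be computed directly once the judge's probabilities of reaching the correct verdict under truthful and deceptive argumentation are estimated. The reading of the two probabilities and the numbers below are illustrative assumptions, not the paper's experimental values.

```python
# Sketch: the agent score difference (ASD) as log p_top - log p_bot.
import math

def agent_score_difference(p_truthful: float, p_deceptive: float) -> float:
    """Positive when the oversight mechanism advantages truth-telling over deception."""
    return math.log(p_truthful) - math.log(p_deceptive)

# Illustrative judge success probabilities under a truthful vs. a deceptive agent policy.
print(agent_score_difference(p_truthful=0.8, p_deceptive=0.3))   # ~0.98 > 0
```

Unlike raw judge accuracy, this difference is unaffected by how often the judge happens to be right for reasons unrelated to the agent's honesty, which is the comparison-baseline issue the paper raises.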
d8LFGLGMRA
SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language Models via Selective Layer-Wise Model Merging
[ "Aladin Djuhera", "Swanand Kadhe", "Farhan Ahmed", "Syed Zawad", "Holger Boche" ]
Fine-tuning large language models (LLMs) on downstream tasks can inadvertently erode their safety alignment, even for benign fine-tuning datasets. We address this challenge by proposing SafeMERGE\footnote{Code available at: \url{https://github.com/aladinD/SafeMERGE}}, a post-fine-tuning framework that preserves safety while maintaining task utility. It achieves this by selectively merging fine-tuned and safety-aligned model layers \emph{only} when those deviate from safe behavior, measured by a cosine similarity criterion. We evaluate SafeMERGE against other fine-tuning- and post–fine-tuning-stage approaches for Llama-2-7B-Chat and Qwen-2-7B-Instruct models on GSM8K and PubMedQA tasks while exploring different merging strategies. We find that SafeMERGE consistently reduces harmful outputs compared to other baselines without significantly sacrificing performance, sometimes even enhancing it. The results suggest that our selective, subspace-guided, and per-layer merging method provides an effective safeguard against the inadvertent loss of safety in fine-tuned LLMs while outperforming simpler post–fine-tuning-stage defenses.
[ "Large Language Models", "Safety", "Safety Alignment", "Fine-Tuning" ]
Accept
https://openreview.net/pdf?id=d8LFGLGMRA
https://openreview.net/forum?id=d8LFGLGMRA
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "yRb5MhXxsh", "o665eJn8I0", "cQlVTJQyxR", "WvXSUHo2ot" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1740856416807, 1740869657574, 1739908537875, 1740750310910 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission107/Reviewer_NYLk" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission107/Reviewer_xW8j" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission107/Reviewer_nn6B" ] ], "structured_content_str": [ "{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review of \\\"SafeMERGE: Preserving Safety Alignment in Fine-Tuned Large Language Models via Selective Layer-Wise Model Merging\\\"\", \"review\": [\"The paper proposes a method to retain safety alignment after fine-tuning an LLM on task-specific data. It achieves this by selectively replacing the fine-tuned LoRA weights of certain layers\\u2014specifically, those that deviate from the safety-aligned subspace\\u2014with linear combinations of the fine-tuned and aligned weights. This results in models that are both less harmful and still highly performant.\", \"**Positive Aspects:**\", \"Effective and practical\\u00a0method for maintaining safety alignment after fine-tuning.\", \"Thorough experimental evaluation.\", \"Comparison to several baselines,\\u00a0with the proposed method mostly outperforming them.\", \"**Negative Aspects:**\", \"Some explanations could be more detailed. In particular, the presentation of the\\u00a0safety-aligned subspace\\u00a0does not seem well-motivated. While I understand that it originates from a previous paper, some additional explanation would be helpful for many readers.\", \"If I understand correctly, the proposed method introduces additional\\u00a0hyperparameters\\u2014the merging factors\\u2014which is tuned. This raises concerns about fairness when comparing to baselines with fewer degrees of freedom. Perhaps the merging factors should be fixed across all models and datasets to ensure a more equitable comparison.\", \"**Additional Comments:**\", \"In many figures (e.g.,\\u00a0Figures 2, 9, 10, and 11), including the scores of the\\u00a0purely aligned\\u00a0and\\u00a0purely fine-tuned models for comparison would make it easier to interpret the results.\", \"DARE merging\\u00a0is mentioned multiple times, but no source is cited for it. Not all readers will be familiar with it, so a citation should be included.\", \"Regarding\\u00a0linear merging, the paper frequently emphasizes that the factors for the fine-tuned and aligned models summing to one is empirically the most successful approach. 
However, this seems like a rather obvious choice, so I\\u2019m not sure if it warrants such strong emphasis.\"], \"rating\": \"6\", \"confidence\": \"3\"}", "{\"title\": \"The authors propose a novel mechanism and provide justification.\", \"review\": [\"The authors propose a mechanism to update the model layer-wise in a manner that the output of the cosine similarity of the LLM would not deviate from the original one.\", \"The rationale behind the method is clear.\", \"The paper is well-written and easy to follow.\"], \"rating\": \"7\", \"confidence\": \"4\"}", "{\"title\": \"Building on SafeLoRA, the paper claims improved LLM safety alignment, could benefit from expanded benchmarking and better formatting.\", \"review\": [\"# Significance\", \"This paper builds upon SafeLoRA and introduces SafeMERGE, to maintain safety alignment in fine-tuned LLMs.\", \"The key distinction between SafeMERGE and SafeLoRA is that _SafeMERGE merges unsafe layers with safe layers after identifying them via the safety-aligned subspace, whereas SafeLoRA projects unsafe layers onto the safety-aligned subspace_.\", \"Fine-tuning LLMs on downstream tasks can unintentionally degrade their safety alignment, even when using benign datasets. This issue is highly relevant to the workshop community.\", \"The authors claim that SafeMERGE achieves a better trade-off between utility and safety compared to existing baselines.\", \"# Overall Quality and Evaluation\", \"The paper presents comprehensive experimental results using widely used open-weight LLMs (LLaMA-2-7B-Chat and Qwen-2-7B-Instruct) and benchmark datasets (GSM8K and PubMedQA).\", \"The evaluation methodology is appropriate, using:\", \"Exact-match accuracy for GSM8K and classification accuracy for PubMedQA to measure utility.\", \"DirectHarm and HexPhi datasets with Llama-Guard-3-8B to assess model safety.\", \"SafeMERGE is compared against relevant baselines (SafeInstruct, RESTA, SafeLoRA) with well-tuned hyperparameters to ensure fairness.\", \"The paper includes ablation studies to analyse key components of SafeMERGE, such as merging strategies, weighting schemes, and similarity thresholds. The appendices provide additional technical details.\", \"# Suggestions for Improving the Paper\", \"The decision to submit a short 4-page paper while placing critical content (e.g., related work, benchmarking experiments) in the appendix limits the clarity and coherence of the work. Given that the workshop allows a 9-page format, expanding the paper could help present the research in a more structured and complete manner.\", \"Comparing LLaMA-2-7B-Chat on the same benchmark datasets used in the SafeLoRA paper would improve reproducibility and allow for a clearer evaluation of SafeMERGE\\u2019s improvements.\", \"While the paper references the use of harmful prompt\\u2013safe response pairs, a more detailed description of this data (e.g., the nature of harmful prompts and the methodology for creating safe responses) would enhance reproducibility and transparency.\", \"Safety threshold \\u03c4: Since the paper highlights the importance of the safety threshold \\u03c4, a more in-depth discussion on how to determine an appropriate \\u03c4 for different models and tasks would be beneficial.\"], \"rating\": \"6\", \"confidence\": \"3\"}" ] }
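A minimal sketch of the selective per-layer merge described in the SafeMERGE abstract: compare each layer's fine-tuned update with the safety-aligned update via cosine similarity, and merge only where the similarity falls below a threshold tau, using merging weights that sum to one. This is a simplified reading; the paper's criterion is defined with respect to a safety-aligned subspace, and the shapes, tau, and alpha below are assumptions.

```python
# Sketch: selective layer-wise merging of fine-tuned and safety-aligned weights.
import torch
import torch.nn.functional as F

def safemerge_layer(w_ft: torch.Tensor, w_aligned: torch.Tensor, w_base: torch.Tensor,
                    tau: float = 0.3, alpha: float = 0.7) -> torch.Tensor:
    d_ft = (w_ft - w_base).flatten()            # fine-tuned update direction
    d_safe = (w_aligned - w_base).flatten()     # safety-aligned update direction
    sim = F.cosine_similarity(d_ft, d_safe, dim=0)
    if sim >= tau:                              # layer still close to safe behavior: keep as-is
        return w_ft
    return alpha * w_ft + (1.0 - alpha) * w_aligned   # otherwise merge; factors sum to one

w_base = torch.randn(64, 64)                    # toy base, fine-tuned, and aligned weights
w_ft = w_base + 0.1 * torch.randn(64, 64)
w_aligned = w_base + 0.1 * torch.randn(64, 64)
print(safemerge_layer(w_ft, w_aligned, w_base).shape)
```

Sweeping tau trades off utility against safety, which is why the reviewers ask for guidance on choosing it per model and task.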
bZcbA5QQAX
Achieving Exact Federated Unlearning with Improved Post-Unlearning Performance
[]
Federated learning is a machine learning paradigm that allows multiple clients to train an aggregated model by sharing model updates with a central server, without sharing their data. Even though the data is not shared, it can indirectly influence the aggregated model via the shared model updates. In many real-life scenarios, we need to completely remove a client's influence (unlearning) from the aggregated model, for example when competitive clients want to remove their influence from the aggregated model (e.g., large language models (LLMs) fine-tuned collaboratively by multiple clients for a specific downstream task) after leaving the coalition, to ensure other clients do not benefit from their contributions. Influence removal is also needed when an adversarial client negatively affects the aggregated model. Though the aggregated model can be retrained from scratch to ensure exact unlearning (completely removing the client's influence from the aggregated model), it performs poorly just after the unlearning, which is undesirable during deployment. To overcome this challenge, this paper proposes federated unlearning algorithms that ensure exact unlearning while achieving better performance post-unlearning. The proposed algorithms are problem-agnostic, making them applicable across various domains. Our experimental results further validate the effectiveness of the proposed federated unlearning algorithms in fine-tuning LLMs and performing vision tasks within a federated learning framework using real-world datasets.
[ "Exact Federated Unlearning", "Improved Post-Unlearning Performance", "Multi-Models Training" ]
Reject
https://openreview.net/pdf?id=bZcbA5QQAX
https://openreview.net/forum?id=bZcbA5QQAX
ICLR.cc/2025/Workshop/BuildingTrust
2025
{ "note_id": [ "ykAFxSGrpO", "KwLvkGswH4", "JoYOPzSLtp", "EgAJwm1Xrr" ], "note_type": [ "decision", "official_review", "official_review", "official_review" ], "note_created": [ 1741104165230, 1740791882156, 1740808194773, 1740931937378 ], "note_signatures": [ [ "ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission139/Reviewer_ARt1" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission139/Reviewer_YqNm" ], [ "ICLR.cc/2025/Workshop/BuildingTrust/Submission139/Reviewer_sDoo" ] ], "structured_content_str": [ "{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}", "{\"title\": \"Review\", \"review\": \"The paper approaches an important area of achieving federated learning as a potential threat is that an adversary client can retrieve the private information of other clients from a server-sent aggregated model.\", \"questions\": \"1. The definition of \\\\emph{exact} federated unlearning is confusing. What is the difference between exact and non-exact federated learning? How does the difference affect the preservation of private data?\\n2. From the unlearning part, the newly initialized global model uses the local model of each client which still contains the private information as the local model is trained on the local training data. In the paragraph of unlearning, $C_{t,r}$ is not mentioned except in the definition. Why do we need that? How does it affect the algorithm?\\n3. No theoretical analysis over convergence to guarantee that repeated adopting newly initialized global model won't deviate from the model's performance.\", \"rating\": \"4\", \"confidence\": \"5\"}", "{\"title\": \"Interesting New Algorithms But With Problems\", \"review\": \"BMT method has some theory problems. The main issue is that models get mixed up during training. Even if you remove one client's local model, the main global model still remembers things from all clients. This is because the global model learned from everyone together before. So, even if you forget a local model, the global model still has some of that client's info.\\n\\nAlso, BMT thinks local models are just about one client's data. But in real training, the starting point for local models comes from the global model. And after many rounds of training, local models are not really separate anymore. They are linked together.\\n\\nMMT is a new and interesting idea. But it makes things much more expensive. If you have $n$ clients, you need to train $O(n)$ models, which costs a lot of computer power and communication. This high cost will stop MMT from being used in real situations.\", \"rating\": \"4\", \"confidence\": \"4\"}", "{\"title\": \"Review for Submission 139\", \"review\": \"The paper studies the problem of federated unlearning (FU) in federated learning (FL) systems. In particular, this paper proposes two novel methods to achieve exact federated unlearning while ensuring better performance post-unlearning: (1) Bi-models training; and (2) Multi-model training. This paper further delivers the theoretical analysis and extensive experimental validation, which demonstrates the better performance of the proposed methods.\\n\\nThe major drawback is that the method developed in this paper is not well explained, there lacks a detailed and formal description of the proposed algorithms and it is also not clear how do the developed models compare with other methods in terms of their design principles. 
Therefore, it would be difficult to fully understand the algorithmic benefits of the proposed algorithms.\", \"rating\": \"6\", \"confidence\": \"3\"}" ] }
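One way to read the reviewers' description of multi-model training (maintaining O(n) models so that a departing client's influence can be removed exactly) is to keep a leave-one-client-out aggregate per client alongside the full aggregate. The single-round FedAvg toy below is an illustrative assumption, not necessarily the paper's bi-/multi-model algorithm, and it sidesteps the cross-round entanglement the reviewer criticizes.

```python
# Sketch: exact removal of one client's influence via leave-one-client-out aggregates.
import numpy as np

def fedavg(updates):
    """Average a dict of per-client model updates (toy single-round aggregation)."""
    return np.mean(list(updates.values()), axis=0)

clients = {i: np.random.randn(10) for i in range(4)}     # toy per-client model updates

global_model = fedavg(clients)
leave_one_out = {i: fedavg({j: u for j, u in clients.items() if j != i}) for i in clients}

departing = 2
global_model = leave_one_out[departing]   # this aggregate never saw client 2's update
del clients[departing]
print(global_model.shape)
```

Switching to a pre-trained leave-one-out model avoids the post-unlearning performance drop of retraining from scratch, at the O(n) training and communication cost the reviewers flag.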