forum_id (string, 9-20 chars) | forum_title (string, 3-179 chars) | forum_authors (sequence, 0-82 items) | forum_abstract (string, 1-3.52k chars) | forum_keywords (sequence, 1-29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39-50 chars) | forum_url (string, 41-52 chars) | venue (string, 46 classes) | year (date, 2013-01-01 to 2025-01-01) | reviews (sequence) |
---|---|---|---|---|---|---|---|---|---|---|
aeQeXlG2Pw | LLM Neurosurgeon: Targeted Knowledge Removal in LLMs using Sparse Autoencoders | [
"Kunal Patil",
"Dylan Zhou",
"Yifan Sun",
"Karthik lakshmanan",
"Senthooran Rajamanoharan",
"Arthur Conmy"
] | Generative AI's widespread use has raised concerns about trust, safety, steerability, and interpretability. Existing solutions, like prompt engineering, fine-tuning, and reinforcement learning (e.g., RLHF, DPO), are often hard to iterate, computationally expensive, and rely heavily on dataset quality.
This paper introduces Neurosurgeon, an efficient procedure that uses sparse autoencoders to identify and remove specific topics from a language model’s internal representations. This approach offers precise control over model responses while maintaining overall behavior. Experiments on the Gemma 2-9B model show Neurosurgeon’s ability to reduce bias in targeted areas without altering the model’s core functionality. | [
"LLM",
"large language model",
"sparse autoencoder",
"autoencoder",
"precise",
"safety",
"steering",
"interpretable",
"interpretability",
"generative AI",
"generative",
"AI",
"machine learning"
] | Accept | https://openreview.net/pdf?id=aeQeXlG2Pw | https://openreview.net/forum?id=aeQeXlG2Pw | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"osxkUtwuOt",
"DGLWKuaGJL",
"0gk95QJFeL"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1740852203137,
1740856381453,
1740719615343
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission126/Reviewer_iPCc"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission126/Reviewer_RbPG"
]
],
"structured_content_str": [
"{\"title\": \"This paper addresses computational concerns of RHLF and solutions such as prompt engineering that are hard to iterate. They introduce Neurosurgeon a procedure that uses (pre-trained) sparse autoencoders to identify and remove specific topics from a language model's internal representations.\", \"review\": \"Strengths:\\n\\nThis paper discusses addresses an important topic in AI safety. Their approach is creative and provides clear advantages (Flexibility, Precision and Computation Efficiency) against other traditional safety techniques such as RLHF. They show consistent improvement in measured metrics (for eg. Coherence and Compliance) over the baseline. \\n\\nIt would be interesting if they compared to other methods apart from prompting the model to avoid a topic. Are there any RLHF or other adaptation techniques that this method can be compared to?\", \"weaknesses\": \"\", \"i_have_a_few_concerns_about_their_method\": \"(1) how about how generalizable this approach can be? It would be interesting to see results on other models. \\n(2) Also, how expensive is the hyper-parameter tuning process? Do you need to figure out which layer etc is better for each feature? Is it more expensive than what can be done with prompt engineering or RLHF?\\n(3) Can this method be used for implicit biases? For eg scenarios where religion can be mentioned but the LLM responses cannot contain nuanced and harmful portrayals of various religions.\", \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review of \\\"LLM Neurosurgeon: Targeted Knowledge Removal in LLMs using Sparse Autoencoders\\\"\", \"review\": [\"## **Summary**\", \"This paper proposes a novel method called Neurosurgeon, which uses sparse autoencoders (SAEs) to identify and suppress specific features activated when certain topics are present in the input prompt. The method involves generating synthetic data pairs one containing the target topic and one without using Gemini. These pairs are then fed into the model to calculate the activation frequency score, which helps in discovering the relevant features. By setting these features to zero, this method effectively prevents the model from generating outputs related to the targeted topics.\", \"## **Strengths & Weaknesses**\", \"### **Strengths**\", \"The paper is well-written and well-organized, presenting its methodology and results clearly.\", \"The authors employ two different methods for feature discovery.\", \"The proposed method is efficient as it does not require updating model parameters, reducing computational overhead.\", \"The evaluation covers both the target concept and related concepts, demonstrating that the targeted topic is successfully suppressed while maintaining the general performance of the model.\", \"### **Weaknesses**\", \"While the paper evaluates both the target topic and related topics, a broader set of topics needs to be tested to strengthen the evaluation.\", \"It would be beneficial to test the method on removing harmful concepts and compare the results with other techniques mentioned in the abstract and introduction, such as DPO, RLHF, and unlearning methods.\", \"The paper lacks statistical analysis regarding the number of features that need to be clamped for different concepts, including both broad and specific topics. Such insights would help understand the applicability of proposed method across various domains.\"], \"rating\": \"7\", \"confidence\": \"5\"}"
]
} |
abllmCsDp8 | Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study | [
"Aryan Agrawal",
"Lisa Alazraki",
"Shahin Honarvar",
"Marek Rei"
] | Large Language Models (LLMs) are highly vulnerable to input perturbations, as even a small prompt change may result in a substantially different output. Existing methods to enhance LLM robustness are primarily focused on perturbed data samples, whereas improving resiliency to perturbations of task-level instructions has remained relatively underexplored. In this work, we focus on character- and word-level edits of task-specific instructions, which substantially degrade downstream performance. We experiment with a variety of techniques to enhance the robustness of LLMs, including self-denoising and representation alignment, testing different models (Llama 3 and Flan-T5), datasets (CoLa, QNLI, SST-2) and instructions (both task-oriented and role-oriented). We find that, on average, self-denoising—whether performed by a frozen LLM or a fine-tuned model—achieves substantially higher performance gains than alternative strategies, including more complex baselines such as ensembling and supervised methods. | [
"LLMs",
"robustness"
] | Accept | https://openreview.net/pdf?id=abllmCsDp8 | https://openreview.net/forum?id=abllmCsDp8 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"UsPY1xi6Ms",
"LcsvcdbkpH",
"CDcHU17oPk"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740907466906,
1740807879173,
1740924236276
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission122/Reviewer_UhdV"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission122/Reviewer_ag1J"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"A well-executed position study on how LLMs handle perturbations in task-specific instructions.\", \"review\": \"The paper addresses an important and underexplored problem: how LLMs handle perturbations in task-specific instructions. The empirical study provides insights into various strategies to enhance robustness.\\n\\nThe study is well-designed, testing different perturbation types (character and word level) on multiple datasets (CoLA, QNLI, SST-2) using two prominent models (LLaMA 3 and Flan-T5). In addition, it compares various robustness-enhancing techniques, including self-denoising (SD), perplexity smoothing, instruction ensembling, and representation alignment.\\n\\nWhile the empirical findings are strong, the paper lacks a deeper theoretical justification for why self-denoising works significantly better than other methods. Besides, the paper focuses on classification tasks, but it would be valuable to test whether the findings hold for other NLP tasks such as text generation, translation, or question-answering. Ablation Studies on Self-Denoising are essential, e.g., How does self-denoising compare when using different model sizes or different fine-tuning strategies? Does self-denoising performance degrade with increasing instruction complexity?\\n\\nThis is a well-executed empirical position study with meaningful contributions to LLM robustness. The experimental setup is strong, and the findings are well-supported by the data. Future work could improve the theoretical justification.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"Comprehensive Approach to Enhancing LLM Robustness Against Perturbed Instructions\", \"review\": \"This paper investigates methods to enhance the robustness of large language models (LLMs) against perturbed instructions, focusing on character- and word-level edits that degrade downstream performance. The authors experiment with various techniques, including self-denoising, representation alignment, instruction ensembling, and perplexity smoothing, across two LLMs (Llama 3 and Flan-T5), three datasets (CoLA, QNLI, SST-2), and six instruction variants. Their findings indicate that iterative self-denoising (SFT-SDi) outperforms other methods, achieving an average improvement of 59.2% in Performance Drop Rate (PDR).\", \"strengths\": [\"The study addresses a critical issue in the deployment of LLMs: their sensitivity to perturbations in task instructions. This is particularly relevant for real-world applications where input variations are common.\", \"The authors conduct extensive experiments using multiple models, datasets, and perturbation types, ensuring a thorough evaluation of the proposed methods.\", \"The results clearly demonstrate the effectiveness of iterative self-denoising (SFT-SDi) over alternative approaches, providing actionable insights for improving LLM robustness.\", \"The paper highlights the potential of LLMs to self-correct perturbed instructions, which could lead to practical solutions for enhancing model reliability in noisy or adversarial environments.\"], \"weaknesses\": [\"While the paper focuses on character- and word-level perturbations, it does not explore more complex or context-aware perturbations (e.g., semantic paraphrasing or adversarial attacks targeting meaning). Expanding the scope would strengthen the conclusions.\", \"The experiments are conducted on mid-sized LLMs (Llama 3 8B and Flan-T5 Large). It would be valuable to test these methods on larger, state-of-the-art models to assess scalability and generalizability.\", \"The paper does not compare the proposed methods with broader robustness-enhancing techniques, such as adversarial training or data augmentation. Including such comparisons would provide a more comprehensive perspective.\", \"The computational cost of the proposed methods, especially iterative self-denoising, is not discussed. This is important for understanding their feasibility in real-time applications.\", \"The paper makes a valuable contribution by systematically evaluating methods to improve LLM robustness against perturbed instructions. The proposed iterative self-denoising approach demonstrates strong performance improvements, offering a promising direction for future research. However, the study could benefit from addressing the aforementioned limitations, particularly regarding perturbation complexity, computational efficiency, and scalability to larger models. With these refinements, the paper has the potential to significantly advance the field of LLM robustness.\"], \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
aHbc1Uzq0Q | A Generative Approach to LLM Harmfulness Detection with Red Flag Tokens | [
"Sophie Xhonneux",
"David Dobre",
"Mehrnaz Mofakhami",
"Leo Schwinn",
"Gauthier Gidel"
] | Most safety training methods for large-language models (LLMs) based on fine-tuning rely on dramatically changing the output distribution of the model when faced with a harmful request, shifting it from an unsafe answer to a refusal to respond.
These methods inherently compromise model capabilities and might make auto-regressive models vulnerable to attacks that make likely an initial token of affirmative response.
To avoid that, we propose to expand the model's vocabulary with a special token we call a *red flag token* ($\langle RF \rangle$) and propose to fine-tune the model to generate this token at any time harmful content is generated or about to be generated.
This novel safety training method effectively augments LLMs into generative classifiers of harmfulness at all times during the conversation.
This method offers several advantages: it enables the model to explicitly learn the concept of harmfulness while marginally affecting the generated distribution, thus maintaining the model's utility.
It also evaluates each generated answer rather than just the input prompt and provides a stronger defence against sampling-based attacks.
In addition, it simplifies the evaluation of the model's robustness and reduces correlated failures when combined with a classifier.
We further show robustness to long contexts, and supervised fine-tuning attacks. | [
"Large Language Models",
"Adversarial Robustness"
] | Accept | https://openreview.net/pdf?id=aHbc1Uzq0Q | https://openreview.net/forum?id=aHbc1Uzq0Q | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"pye35dYXJ3",
"evT7GChJ0G",
"bUAxerNorG",
"MISwv1ItWN"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740811158730,
1741078799095,
1740900280395,
1740706453567
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission89/Reviewer_8G66"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission89/Reviewer_yFpn"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission89/Reviewer_WSee"
]
],
"structured_content_str": [
"{\"title\": \"Comments\", \"review\": \"Strengths:\\n\\n1. The paper introduces an innovative method for assessing response safety by introducing a red flag token. This approach avoids rejections that could harm the quality and distribution of responses to harmless queries, thereby preserving the model's utility.\\n2. It simplifies the evaluation of the model's robustness and, when combined with a classifier, reduces the likelihood of correlated failures.\\n3. Comprehensive experiments and analyses are provided to support the paper\\u2019s conclusions.\", \"weaknesses\": \"1. While the approach significantly outperforms the baseline on grey-box assumptions, its performance on white-box is worse than the baseline.\\n2. The impact of adding the red flag token on utility might be sensitive to the fine-tuning data used.\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"comment\": \"The paper introduces an innovative red flag token (\\u27e8rf\\u27e9) to detect harmful outputs in LLMs, demonstrating robustness against adversarial attacks while maintaining model utility. However, its effectiveness is sensitive to hyperparameters and training data quality, lacks direct comparison with classifier-based methods, and its impact under full fine-tuning remains unexplored.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review of A Generative Approach to LLM Harmfulness Detection with rf Tokens\", \"review\": \"## Summary\\nThe paper introduces a red flag token (\\u27e8rf\\u27e9) for large language models (LLMs) to detect harmful outputs by training the model to generate \\u27e8rf\\u27e9 whenever harmful content is produced or about to be produced, enabling it to act as a generative classifier of harmfulness. Through extensive experiments, the authors demonstrate that this approach maintains model utility, improves robustness against adversarial attacks (e.g., pre-filling and sampling), and can be stored in a LoRA module for post-hoc application to fine-tuned models. \\n\\n## Strengths\\n\\n1. Theoretical and Empirical Validation: \\nProvides a well-designed loss function and extensive experiments across multiple models and attack scenarios\\n\\n2. Robustness to Adversarial Attacks: \\nDemonstrates strong defense against realistic attacks like pre-filling and sampling, as well as generalization to long contexts.\\n\\n## Weaknesses\\n\\n1. Hyperparameter Sensitivity: \\nThe approach requires careful tuning of hyperparameters, which may limit its practicality without extensive resources.\\n2. Dependence on Training Data: \\nThe effectiveness of \\u27e8rf\\u27e9 relies on the quality and diversity of the training data, which may not cover all harmful scenarios.\\n3. Lack of comparison with classifier method: \\nThe paper does not quantitatively compare the \\u27e8rf\\u27e9 method to a standalone harmfulness classifier in terms of performance metrics and trade offs.\", \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"This work introduces adding special \\\"red flag\\\" token into the model and training the model to prepend generation of unsafe content with said token. They highlight that this method is more robust to black-box and grey-box adversarial attacks.\", \"review\": \"This work is written clearly, and the method introduced in this paper seems original.\\n\\nWhy is the index where <rf> is placed is sampled? Would it be beneficial to decide spans of harmful responses and only their input <rf> token?\\nFigure 2 subplots should be larger to increase their readability and ease of understanding the results.\", \"pros\": [\"Multiple models were evaluated, and multiple benchmarks were utilized to ensure that the model's performance didn't go down.\", \"I think such tokens could be used with other safety-increasing methods, making LLM more robust.\", \"Authors evaluated their method on multiple types of attacks.\"], \"cons\": [\"The part of the dataset with harmful content was very small.\", \"It would be nice to see how this method influences the model performance after full fine-tuning instead of LoRA. Due to LoRA being known for forgetting less information when fine-tuning [1], it could be the case here that when this method was applied with full finetuning, the changes in the MMLU and other datasets could be larger.\", \"[1] Biderman, Dan, et al. \\\"Lora learns less and forgets less.\\\" Transactions on Machine Learning Research (2024).\"], \"rating\": \"7\", \"confidence\": \"4\"}"
]
} |
a2iMmaLG3z | MFC-Bench: Benchmarking Multimodal Fact-Checking with Large Vision-Language Models | [
"Shengkang Wang",
"Hongzhan Lin",
"Ziyang Luo",
"Zhen Ye",
"Guang Chen",
"Jing Ma"
] | Large vision-language models (LVLMs) have significantly improved multimodal reasoning tasks, such as visual question answering and image captioning. These models embed multimodal facts within their parameters, rather than relying on external knowledge bases to store factual information explicitly. However, the content discerned by LVLMs may deviate from factuality due to inherent bias or incorrect inference. In this work, we introduce MFC-Bench, a rigorous and comprehensive benchmark designed to evaluate the factual accuracy of LVLMs across three stages of verdict prediction for multimodal fact-checking (MFC): Manipulation, Out-of-Context, and Veracity Classification. Through our evaluation on MFC-Bench, we benchmarked a dozen diverse and representative LVLMs, uncovering that current models still fall short in MFC and demonstrate insensitivity to various forms of manipulated content. We hope that MFC-Bench could raise attention to the trustworthy AI potentially assisted by LVLMs in the future. | [
"benchmarking",
"evaluation",
"cross-modal application",
"multimodality",
"trustworthy"
] | Accept | https://openreview.net/pdf?id=a2iMmaLG3z | https://openreview.net/forum?id=a2iMmaLG3z | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"SlsHo57UKJ",
"6GwgvjdWsh"
],
"note_type": [
"official_review",
"decision"
],
"note_created": [
1740636456354,
1741109293215
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission25/Reviewer_ftPU"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Introduces a new vision-language benchmark to test for VLMs' ability to detect input manipulations/misalignments.\", \"review\": \"Introduces a new benchmark. The task may not have much ecological validity, and hence hard to understand the importance of it, but it is maybe useful to test for model robustness to certain input manipulations. The human eval for assuring data quality is well done. I also like the evaluation of the justification across the different dimensions. Good thorough analysis.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"comment\": \"MFC-Bench provides a useful tool for evaluating model trustworthiness, particularly in detecting factual inconsistencies in multimodal settings. Further discussion on the real-world applicability of the benchmark would enhance its contribution. The paper is highly relevant for the workshop.\", \"title\": \"Paper Decision\"}"
]
} |
Zlx6AlEoB0 | Antipodal Pairing and Mechanistic Signals in Dense SAE Latents | [
"Alessandro Stolfo",
"Ben Peng Wu",
"Mrinmaya Sachan"
] | Sparse autoencoders (SAEs) are designed to extract interpretable features from language models, yet they often yield frequently activating latents that remain difficult to interpret. It is still an open question whether these \textit{dense} latents are an undesired training artifact or whether they represent fundamentally dense signals in the model's activations. Our study provides evidence for the latter explanation:
dense latents capture fundamental signals which (1) align with principal directions of variance in the model's residual stream and (2) reconstruct a subspace of the unembedding matrix that was linked by previous work to internal model computation.
Furthermore, we show that these latents typically emerge as nearly antipodal pairs that collaboratively reconstruct specific residual stream directions. These findings reveal a mechanistic role for dense latents in language model behavior and suggest avenues for refining SAE training strategies. | [
"interpretability",
"mechanistic interpretability",
"SAE",
"sparse autoencoder"
] | Accept | https://openreview.net/pdf?id=Zlx6AlEoB0 | https://openreview.net/forum?id=Zlx6AlEoB0 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"rxqSowHW1E",
"ny8GN2WDIv",
"gDn3WlTdmq",
"0O2D90HEkI"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1740924410812,
1740904843511,
1740891600945,
1740909178614
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission63/Reviewer_Wvcc"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission63/Reviewer_XpvX"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission63/Reviewer_4MVj"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review of paper: Antipodal Pairing and Mechanistic Signals in Dense SAE Latents\", \"review\": [\"The paper investigates the role of dense latents in Sparse Autoencoders (SAEs). The authors show that dense latents capture fundamental signals. They also show that most dense latents are arranged in antipodal pairs.\", \"### Pros\", \"The paper provides a novel explanation for what information dense latents in SAEs contain, specifically for language models.\", \"There are detailed empirical results that support the authors' claims.\", \"### Cons\", \"The paper is a bit hard to follow due to limited introduction on the background. It would be helpful to add some background information on dense latents, residual streams, etc.\", \"Lack of theoretical analyses. Consider adding more theoretical analyses to support the empirical results.\", \"Clarity in Mathematical Presentation \\u2013 Some formulae and derivations, including Equation (1), could be explained more clearly for readers unfamiliar with SAE training dynamics.\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"Review of \\\"Antipodal Pairing and Mechanistic Signals in Dense SAE Latents\\\"\", \"review\": [\"## **Summary**\", \"This paper analyzes the dense latents of Sparse Autoencoders (SAEs). It studies the relationship between dense SAE latents and the language model's residual stream, revealing that SAE dense latents learn important signals to reconstruct directions in the residual stream. More precisely, it analyzes: 1) The alignment of dense latents with the top principal components of the residual stream. 2) The alignment of dense latents with the dark subspace of the unembedding matrix. 3) The connection between latent density and antipodal pairing.\", \"## **Strengths & Weaknesses**\", \"### **Strengths**\", \"This paper is well-written and presents a novel perspective in its analysis of dense latents in SAEs.\", \"The study examines dense latents from three different aspects, supported by sufficient experiments and demonstrations.\", \"The finding that dense latents are learned in a way that reconstructs key directions in the residual stream is particularly insightful.\", \"### **Weaknesses**\", \"The main weakness of this study is that it does not provide an explanation for certain observed behaviors, which could be useful for exploring alternative solutions.\", \"Although the paper states that most latents are not activated frequently, Figure 4 (for GPT-2) shows that a larger number of latents have higher $\\\\rho$ values, and a few latents have high pairwise scores and there is lack of explanation for that.\", \"Additionally, there are some minor typographical errors, such as:\", \"In line 80, both terms are written as $W_{\\\\text{enc}}$ instead of distinguishing between the encoder and decoder weights.\", \"In line 198, a \\\".\\\" (period) is missing.\"], \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Though this paper presents an interesting study of dense latents and their alignment with key model subspaces, it lacks a strong theoretical foundation and does not establish clear causal relationships.\", \"review\": \"The study explores the role of dense latents in SAEs, a topic relevant to interpretability research in LLMs. The authors conduct detailed evaluations across different language models (Gemma 2, GPT-2, LLaMA 3.1). The use of principal component analysis (PCA), singular value decomposition (SVD), and SHAP analysis adds rigor to the study.\\n\\nHowever, the authors claim that dense latents serve a \\\"mechanistic role\\\" in the model\\u2019s residual stream but provide little theoretical backing for why this phenomenon occurs. There is no discussion on whether these findings generalize across different training paradigms or model architectures. In addition, while the correlation between dense latents and residual stream variance is well-documented, correlation does not imply causation. The paper does not perform intervention-based experiments (e.g., ablating dense latents and observing performance degradation). It remains unclear if these latents **cause** important computations or merely reflect existing model dynamics.\\n\\nSome experimental Designs should be clarified. For instance, the definition of \\\"dense latents\\\" is somewhat arbitrary (activating on 10%+ tokens). The threshold should be better justified. The synthetic setup lacks clarity: Are SAEs trained with different sparsity constraints to observe controlled effects? The role of antipodal latents is discussed qualitatively but lacks precise mathematical formulation.\\n\\nMoreover, the paper does not compare SAEs with **alternative feature-extraction methods** (e.g., attention attribution, dictionary learning). Without such baselines, it is difficult to assess whether SAEs are uniquely capturing useful signals.\", \"rating\": \"5\", \"confidence\": \"4\"}"
]
} |
ZkNyX9M191 | FiDeLiS: Faithful Reasoning in Large Language Models for Knowledge Graph Question Answering | [
"Yuan Sui",
"Yufei He",
"Nian Liu",
"Xiaoxin He",
"Kun Wang",
"Bryan Hooi"
] | Large language models (LLMs) are often challenged by generating erroneous or hallucinated responses, especially in complex reasoning tasks. Leveraging knowledge graphs (KGs) as external knowledge sources has emerged as a viable solution. However, existing KG-enhanced methods, either retrieval-based or agent-based, encounter difficulties in accurately retrieving knowledge and efficiently traversing KGs at scale. In this paper, we propose a unified framework, FiDeLiS, designed to improve the factuality of LLM responses by anchoring answers to verifiable reasoning steps retrieved from a KG. To achieve this, we leverage step-wise beam search with a deductive scoring function, allowing the LLM to validate each reasoning step and halt the search once the question is deducible. In addition, our Path-rag module pre-selects a smaller candidate set for each beam search step, reducing computational costs by narrowing the search space. Extensive experiments show that our training-free and efficient approach outperforms strong baselines, enhancing both factuality and interpretability. | [
"Large Language Model Hallucination",
"Faithful Reasoning",
"Knowledge-enhanced LLMs",
"Knowledge Graphs"
] | Accept | https://openreview.net/pdf?id=ZkNyX9M191 | https://openreview.net/forum?id=ZkNyX9M191 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"sUgpR6iaAA",
"WueDtkZmDw",
"RB0sQsHHKC",
"Om1o2fwPOE"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740916670635,
1740904498642,
1740940058336,
1741104252453
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission148/Reviewer_UJ1d"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission148/Reviewer_MKR3"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission148/Reviewer_J21u"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"FiDeLiS addresses a critical challenge in LLM reasoning\\u2014namely, the prevalence of hallucinated or factually incorrect outputs\\u2014by leveraging external structured knowledge from KGs. While the paper is motivated by an important problem and proposes an intriguing two-pronged approach, it suffers from several significant shortcomings that undermine its overall contribution.\", \"review\": \"Strengths:\\n\\n1.Integrated Framework:\\n By combining a retrieval-augmented module (Path-RAG) with a deductive verification beam search (DVBS), the method aims to ensure that every reasoning step is both semantically relevant and logically valid.\\n\\n2.Empirical Evaluation:\\n Extensive experiments on KGQA benchmarks (such as WebQSP, CWQ, and others) demonstrate that FiDeLiS can outperform several strong baselines, including methods like ToG and RoG.\", \"weaknesses\": \"1.Computational Inefficiency:\\n Despite claims of efficiency gains via candidate pre-selection, the heavy reliance on step-wise beam search and LLM interactions results in high latency. The trade-off between computational cost and improved reasoning fidelity is not sufficiently justified.\\n\\n2.Questionable Novelty:\\n FiDeLiS largely builds on existing retrieval-augmented frameworks and deductive verification techniques. The integration of these components, though non-trivial, does not represent a significant leap in innovation.\\n\\n3.Reliability Issues:\\n Error analysis reveals that a substantial portion of the generated reasoning steps (only around 67% validity) are either formatted incorrectly or reference non-existent KG facts. This undermines the paper\\u2019s central claim of achieving faithful reasoning.\\n\\n4.Organization and clarity:\\n Critical details\\u2014such as the rationale behind key design choices and hyperparameter settings\\u2014are not clearly articulated\", \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"title\": \"Well-written paper with overall and stepwise evaluations\", \"review\": \"**Summary**\\nThis paper is addressing the hallucination problem of LLM reasoning by retrieving relative knowledge from knowledge graphs (KG). The paper proposed two modules: PATH-REG and DVBS, which aims to retrieve the relevant and accurate information from the KG efficiently. The experiments verifies that their proposed method can enhance the reasoning ability of LLMs, and the ablation studies verifies each of their design choice. \\n\\n**Strengths** \\n1. The paper is well-written, very clear, and easy to understand and follow the ideas. \\n2. The experiments evaluates the model on the both the final and stepwise reasoning performance, which confirms advantages of the proposed method.\\n\\n**Weakness**\\n1. In Line 103, in the relationship Justin Bieber ---> Jeremy Bieber --> Erin Wagner, is the subject of two relationships are Jeremy Bieber? If not, should the second relationship be ex-husband? I am not familiar with KGs, so I am not very sure about this. \\n2. In Section 3.1 Line 199-200, the paper computing the score $S_0((r_j, e_j))$ by only examining entities within one-hop. Is it possible to extend this to multi-hop, since my intuition is that probably multi-hop ($H\\\\geq2$) can improve the accuracy of the retrieval, but is less efficient which requires more computational resources. So I suggest authors add more discussion about this. \\n3. How is the performance of using LLMs to do the deductive verification? Is there any potential limitations using this LLM-as-a-Judge methodology, given the fact it has position bias, length bias, etc. \\n\\n\\n**Sidenote** \\n1. I suggest authors read about the PLASMA [1] paper, which uses the similar idea of verification-based beam search as this paper but on planning tasks. \\n[1] Brahman, Faeze, et al. \\\"Plasma: Making small language models better procedural knowledge models for (counterfactual) planning.\\\" arXiv preprint arXiv:2305.19472 (2023).\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"Good Paper for Ensuring Fidelity of Knowledge-Based Reasoning\", \"review\": \"This paper tackles the important problem of ensuring faithfulness to retrieved knowledge graph entries in multi-hop reasoning tasks. They identify two key challenges faced by such methods. Firstly, retrieval systems may be insufficient to collect all relevant knowledge graph edges simply given the initial query. Secondly, the generated reasoning chains may have hallucinations, fail to terminate, or terminate inappropriately early. Both failure modes introduced are interesting and represent important problems for the deployment of knowledge-graph based reasoning in LLMs. Then, they propose two methods for improving these issues (1) a Path-Rag system which takes into account the structural connectivity of knowledge graph and (2) a deductive beam search strategy which efficiently explores reasoning chains and ensures they obey local and global coherence constraints. They show the superior performance of this method relative to standard prompting, chain of thought, and RAG. Overall, I like this paper. The proposed modifications are well-motivated and justified by the provided ablations. Moreover, this paper is generally well-written and easy to follow. In terms of questions, I wonder why the authors chose to not validate any of the new test-time reasoning models with and without their framework (i.e. o1 and r1). I think the models evaluated currently are a bit behind the state-of-the-art. Moreover, I was slightly confused about whether the \\\"retrieval\\\" is run only once in the beginning or to generate candidate relations after every step of reasoning. In the first case, the evaluation of different relations could still suffer from being too \\\"local\\\" to the initial query, whereas the second could result in some efficiency issues. It would be quite interesting if the authors could discuss this more explicitly. Lastly, I wonder how the number of retrieved knowledge graph entries (denote by M) should be set. Could it vary based on the query? Overall, however, I think this is an interesting and valuable contribution and would like it to be accepted.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
Z9qzta1yiK | Towards Understanding Fine-Tuning Mechanisms of LLMs via Circuit Analysis | [
"Xu Wang",
"Yan Hu",
"Wenyu Du",
"Reynold Cheng",
"Benyou Wang",
"Difan Zou"
] | Fine-tuning significantly improves the performance of Large Language Models (LLMs), yet its underlying mechanisms remain poorly understood. This paper aims to provide an in-depth interpretation of the fine-tuning process through circuit analysis, a popular tool in Mechanistic Interpretability (MI). Unlike previous studies (Prakash et al. 2024, Chhabra et al. 2024) that focus on tasks where pre-trained models already perform well, we develop a set of mathematical tasks where fine-tuning yields substantial performance gains, bringing the setup closer to real-world scenarios. In our experiments, we identify circuits at various checkpoints during fine-tuning and examine the interplay between circuit analysis, fine-tuning methods, and task complexities. First, we find that while circuits maintain high node similarity before and after fine-tuning, their edges undergo significant changes, contrasting with previous work (Prakash et al. 2024, Chhabra et al. 2024) that reported only small circuit additions after fine-tuning. Based on these observations, we develop a circuit-aware Low-Rank Adaptation (LoRA) method that assigns ranks to layers according to edge changes in the circuits. Experimental results demonstrate that our circuit-based LoRA achieves an average improvement of $2.46\%$ over standard LoRA with comparable parameter sizes. Furthermore, we explore how combining circuits from subtasks can enhance fine-tuning in compositional tasks, offering new insights into task design and deepening our understanding of circuit dynamics and fine-tuning mechanisms. | [
"Circuit Analysis",
"Fine-Tuning",
"Mechanistic Interpretability"
] | Accept | https://openreview.net/pdf?id=Z9qzta1yiK | https://openreview.net/forum?id=Z9qzta1yiK | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"SVxGpLEDP8",
"RQfGukGGHV",
"BB6PGaxaFT"
],
"note_type": [
"decision",
"official_review",
"official_review"
],
"note_created": [
1741054621055,
1740852461896,
1740447957651
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission26/Reviewer_6KUR"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission26/Reviewer_8mqt"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"A well-written paper investigating exciting topics in mechanistic interpretability, with practical utility at the same time\", \"review\": [\"Strengths:\", \"This paper focused on illuminating the LLM fine-tuning process from a mechanistic interpretability angle, which is novel.\", \"By conducting analyses on 3 LLMs, and on 5 challenging math datasets, the authors are able to conclude some general behaviors of the extracted circuits during the course of fine-tuning, which gives valuable insights to understand the fine-tuning process.\", \"What\\u2019s more exciting is that the authors are able to propose circuit-aware fine-tuning heuristics to provide actionable insights on top of those interpretations, and they are also able to provide early evidence of the effectiveness of such methods.\", \"At the end the authors also try to address a question that people in the mechanistic interpretability field had long been interested in solving \\u2013 can we combine circuits from sub-tasks to solve for a combination task, and they also provide some early evidence on the possibility.\", \"The paper is well written.\"], \"weaknesses\": [\"N/A.\"], \"rating\": \"9\", \"confidence\": \"4\"}",
"{\"title\": \"A relevant study on LLM fine-tuning interpretability using circuit analysis\", \"review\": [\"This paper explores the mechanisms behind fine-tuning LLMs by treating them as computational graphs and examining the circuits within these graphs. The authors focus on tasks where fine-tuning leads to significant performance improvements, using mathematical tasks on which pre-trained LLMs initially perform poorly. They identify and analyze circuits within these models and develop a circuit-aware LoRA method which allocates higher ranks to layers with more edge changes. This method improves fine-tuning efficiency and performance across various mathematical tasks.\", \"**Clarity:** The paper is generally well-written and effectively communicates its key findings and their implications. However, some parts of the paper could benefit from further clarification, particularly regarding the limitations of circuit analysis and the generalizability of the findings.\", \"**Strengths**:\", \"The paper addresses a significant gap in our understanding of fine-tuning by focusing on tasks where performance gains are substantial, providing valuable insights into the underlying mechanisms.\", \"The use of circuit analysis within the framework of Mechanistic Interpretability provides a robust and insightful approach to examining the changes induced by fine-tuning.\", \"The discovery that fine-tuning primarily modifies circuit edges rather than adding new components is a significant contribution to the field.\", \"The development of circuit-aware LoRA, which leverages the identified edge modifications, demonstrates the practical value of these findings.\", \"**Weaknesses**:\", \"The study focuses on a specific set of mathematical tasks, and further research is needed to determine the generalizability of these findings.\", \"Circuit analysis, while insightful, has limitations in terms of its interpretability and scalability.\", \"The circuit analysis methods used in this study can be computationally expensive.\", \"**Areas of Improvements:**\", \"Expand the scope of the study to include a wider range of tasks and model architectures.\", \"Provide a more detailed discussion of the computational costs associated with circuit analysis.\", \"Further clarify the limitations of circuit analysis and the generalizability of the findings.\", \"Overall, paper makes a significant contribution to our understanding of fine-tuning mechanisms and has the potential to impact future research in this area. The authors should address the identified weaknesses, particularly by discussing the limitations of circuit analysis and the generalizability of their findings.\"], \"rating\": \"7\", \"confidence\": \"3\"}"
]
} |
YeYT9px8DL | ORTHOGONAL SAE: FEATURE DISENTANGLEMENT THROUGH COMPETITION-AWARE ORTHOGONALITY CONSTRAINTS | [] | Understanding the internal representations of large language models is crucial for ensuring their reliability and enabling targeted interventions, with sparse autoencoders (SAEs) emerging as a promising approach for decomposing neural activations into interpretable features. A key challenge in SAE development is feature absorption, where features stop firing independently and are ``absorbed'' into each other to minimize $L_1$ penalty. We address this through Orthogonal SAE, which introduces sparsity-guided orthogonality constraints that dynamically identify and disentangle competing features through a principled three-phase curriculum. Our approach achieves state-of-the-art results on the Gemma-2-2B language model for feature absorption while maintaining strong reconstruction quality and model preservation on downstream tasks. These results demonstrate that orthogonality constraints and competition-aware training can effectively balance the competing objectives of feature interpretability and model fidelity, enabling more reliable analysis of neural network representations. | [
"Feature Disentanglement",
"Debiasing in Neural Networks",
"Dangerous Knowledge Filtering",
"Ethical AI",
"Sparse Autoencoders",
"Model Safety",
"Bias Reduction Techniques"
] | Reject | https://openreview.net/pdf?id=YeYT9px8DL | https://openreview.net/forum?id=YeYT9px8DL | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"greyNeSSXE",
"gEZsCQvqwg",
"KhbDDvoRw1",
"CdaCnssvQj"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1739981690559,
1739691766193,
1741099578433,
1739848131214
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission138/Reviewer_tPm8"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission138/Reviewer_cwJT"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission138/Reviewer_fXn4"
]
],
"structured_content_str": [
"{\"title\": \"Solid Scientific Idea and Execution; Paper Requires Work Before Acceptance\", \"review\": \"Feature absorption is one of the major problems facing current Sparse Autoencoders (SAEs), and this paper presents an intuitive way to counter absorption by introducing a similarity penalty. Specifically, during training the cosine similarity between all pairs of features is calculated and added as a regularization penalty.\\n\\nWhile the results regarding absorption are promising, there are clear problems with the paper that block it from being accepted at this time.\\n\\nStrengths\\n- Feature absorption is a real problem in SAEs, and this is an intuitive way to solve it. There are grounded mathematical underpinnings.\\n- The MSE and absorption results are SoTA with the three SAE architectures (JumpReLU, BatchTopK, and Standard ReLU)\\n\\nWeaknesses\\n- While both Sparse Probing and Cross Entropy are discussed for nearly half a page as useful metrics to evaluate Orthogonal SAEs (and I agree that they are useful), as far as I can see neither of the results for these experiments are provided. Tables 1/2 show Absorption, MSE, and KL Div, which is striking because MSE is not discussed earlier in the paper as a useful metric.\\n- The main advantage of Orthogonal SAEs that the authors advocate for is an improvement in feature absorption, but as far as I can tell how absorption is calculated in Table 1/2 is not described in the paper\\n- There is no discussion about the compute required relative to standard architectures. Intuitively, with a $k = 16, 384$ encoder, the cosine similarity matrix will be of size $k^2 \\\\approx 270M$ entries. This presents in the last term of Eq. 4. It would be ideal to have a pareto curve of compute and the target metric (likely absorption) to have a robust discussion about how compute efficient this architecture is. This is especially important as for models like Gemma-2-9B, SAEs encoders can have up to $k = 1M$, and an $O(n^2)$ compute cost might not be practical.\\n- This is a more minor point, but the three phase training curriculum is unintuitive to me. Specifically, there seem to be magic numbers in the training (see $\\\\alpha(t)$ in step 2, the number of training steps in each phase). How did these numbers come to be? How does it change as the architecture scales? What recommendations do you have for future practitioners trying to design orthogonal SAEs? All of these would be useful questions to discuss. I also have a personal question: Why do we first train on only reconstruction with no sparsity penalty? As far as I'm aware, this is not done in the literature, likely because with an overcomplete basis you can just learn an identity mapping from the activations\\n- Another minor point, but ideally you could compare to more recent architectures, like Matryoshka SAEs, which are also designed to help with absorption. SAEBench [1] should have a simple interface to load Orthogonal SAEs in to compare against standard architectures. This is not critical to paper acceptance, as I believe you sufficiently benchmarked your model, it could be nice to showcase the robustness of your results.\\n- Lastly, the paper repeatedly refers to TopK SAEs, but they are BatchTopK SAEs. TopK SAEs refers to [2]\\n\\nOverall this paper can still reach acceptance if it addresses the weaknesses listed. Most of these likely do not require additional experiments to be run. 
The paper's presentation must be revised.\\n\\n[1] Karvonen, A., Rager, C., Lin, J., Tigges, C., Bloom, J.,\\nChanin, D., Lau, Y.-T., Farrell, E., Conmy, A., Mc-\\nDougall, C., Ayonrinde, K., Wearden, M., Marks, S., and\\nNanda, N. Saebench: A comprehensive benchmark for\\nsparse autoencoders, December 2024b. URL https://\\nwww.neuronpedia.org/sae-bench/info. Ac-\", \"cessed\": \"2025-01-20.\\n\\n\\n[2] Gao, L., la Tour, T. D., Tillman, H., Goh, G., Troll, R.,\\nRadford, A., Sutskever, I., Leike, J., and Wu, J. Scaling\\nand evaluating sparse autoencoders, 2024. URL https:\\n//arxiv.org/abs/2406.04093.\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"title\": \"Official Review of Submission138\", \"review\": [\"# Strengths\", \"**Novel Approach**: The paper introduces a novel approach (Orthogonal SAE) to address the feature absorption problem in sparse autoencoders, a significant issue in interpretability. The use of competition-guided orthogonality constraints combined with a three-phase curriculum is a novel and technically sound idea.\", \"**Ablation Studies**: Ablation studies are sound.\", \"**Well-Structured and Written**: The paper is generally well-structured and clearly written, though short, making it easy to follow.\", \"# Weaknesses\", \"**Limited Evaluation Scope**: While the evaluation on Gemma-2-2B is valuable, the paper's claims of \\\"state-of-the-art results\\\" would be significantly strengthened by evaluation on a wider range of model sizes and architectures. The authors acknowledge this limitation due to computational constraints, but it remains a weakness.\", \"**Limited Evaluation Metrics**: Authors excluded several key metrics [1] for evaluating SAEs (eg Explained Variance, L0 sparsity, Automated Interpretability, etc.).\", \"**Hyperparameter Sensitivity**: While the paper details the training curriculum, it would be beneficial to include a sensitivity analysis of key hyperparameters (\\u03bbs, \\u03bbo, \\u03b8) to understand the robustness of the approach.\", \"**Explainability of KL Divergence Result**: The KL divergence results are worse, which hurt the claims of preserving the model performance, and need more thorough explanation and investigation.\", \"**Theoretical Justification**: While the intuition behind the approach is clear, a more formal theoretical justification for the competition-aware orthogonality constraints and their impact on feature disentanglement could strengthen the paper.\", \"**Computational Cost**: It will be good to discuss the computational cost of the approach.\", \"[1] https://www.neuronpedia.org/sae-bench/info\"], \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Novel approach that reduces feature absorption in sparse autoencoders via curriculum learning and feature competition. The results are encouraging but vital information is missing and presentation could be better.\", \"review\": [\"The authors improve the interpretability of SAEs by tackling the problem of feature absorption where the model optimizes for the L1 norm at the cost of easily understood features. They tackle this problem using two novel and simple techniques which draw inspiration from other fields of machine learning. The first technique is the introduction of a curriculum which gradually introduces the various loss terms of the SAE over predetermined time intervals. The second technique introduces a third loss to the SAE based on feature competition which attempts to prevent the model from using feature pairs that co-activate on similar inputs. Using these techniques, the authors demonstrate the feature absorption on Gemma 2 2B reduces significantly while maintaining model quality. Additionally, they establish the importance of the proposed techniques through an ablation study which is currently difficult to parse.\", \"The biggest problem with the proposed approach is the large number of hyper-parameters with no explanation of how to arrive at the selected values in the paper. Despite these shortcomings, the paper represents a small step in making models more interpretable by tackling a core problem of SAEs and will be useful to the mechanistic interpretability community after its presentation is improved.\", \"Strengths\", \"Introduces a new method for training SAEs using curriculum learning and a competition-aware mechanism using an orthogonality matrix.\", \"Attains state-of-the-art performance in attaining low feature absorption while maintaining similar model quality as evidenced by other metrics.\", \"Weaknesses\", \"(major) Uses a large number of hyper-parameters like the number of steps for each curriculum, coefficients for each loss and coefficients for the dynamic threshold. There is no explanation of how the authors arrived at the reported values which will make applying the architecture to LLMs other than Gemma 2 2B difficult.\", \"I find it a bit difficult to parse Table 2 and understand how the ablation study was performed. The authors claim that curriculum learning had the most impact but the table demonstrates that using a fixed threshold yields the lowest absorption score (apart from the full model).\", \"Some errors in how the results are reported: Cross entropy loss score is defined but is not reported and the paper conflicts itself on whether KL divergence is better when the value is low or high (line 202 and 235, lower is better should be the true claim).\", \"There is no mention of the limitations of the approach (with the primary one being the large number of hyper-parameters).\", \"(minor) The Gemma model is cited as Team et al which is not descriptive, it would be better to use \\\"Gemma Team\\\" as mentioned in the model technical report.\", \"(minor) In the discussion (line 267), the authors claim that their work would be useful for sparse attention mechanisms but I do not see why this would be the case. The SAEs do not use attention mechanisms and the underlying LLM is already pre-trained using regular attention. I suggest that the authors rephrase the text to make what they are trying to say more clear.\", \"(very minor) Perhaps a citation for curriculum learning should be added?\"], \"rating\": \"5\", \"confidence\": \"3\"}"
]
} |
Y9ClTAjAzw | Toward Trustworthy Difficulty Assessments: Large Language Models as Judges in Programming and Synthetic Tasks | [] | Large Language Models (LLMs) have demonstrated remarkable capabilities in natural language processing but face challenges in structured tasks such as predicting the difficulty of competitive programming problems. We compare GPT-4o against an interpretable LightGBM ensemble on a dataset of 1,825 LeetCode problems labeled Easy, Medium, or Hard. Our experiments reveal that GPT-4o achieves only 37.75\% accuracy, significantly below the 86\% achieved by LightGBM. Detailed analyses, including confusion matrices and SHAP-based interpretability, highlight that numeric constraints play a crucial role in classifying harder problems. By contrast, GPT-4o often overlooks such details and exhibits a bias toward simpler categories. Additionally, we investigate GPT-4o's performance in generating and classifying synthetic Hard problems. Surprisingly, GPT-4o labels almost all synthetic Hard problems as Medium, contradicting its behavior on real Hard problems. These findings have implications for automated difficulty assessment, educational platforms, and reinforcement learning pipelines reliant on LLM-based evaluations. | [
"Large Language Models (LLMs)",
"Competitive Programming Difficulty",
"GPT-4o",
"Interpretable Machine Learning",
"Interpretable Model",
"Synthetic Problem Generation",
"Numeric Constraints",
"Automated Assessment"
] | Reject | https://openreview.net/pdf?id=Y9ClTAjAzw | https://openreview.net/forum?id=Y9ClTAjAzw | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"VuszH3s42w",
"HuRudhDFI1",
"2guDyw23SQ",
"2eBVP5SwQA"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740453754909,
1740703818388,
1740908139472,
1740855904756
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission7/Reviewer_c9up"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission7/Reviewer_XSwV"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission7/Reviewer_eom7"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Lacking of Novel Contribution\", \"review\": \"This workshop paper lacks originality and significance of work, as LLM reasoning abilities, especially compared to tabular models on tabular tasks, has been decently studied - this work does not provide any novel contribution.\", \"pros\": [\"this paper is generally pretty clear in it's observation of the phenomenon above\"], \"cons\": [\"mainly, this paper lacks novelty/quality/significance of work (and any contribution other than a highlight of a LLM shortcoming)\", \"plots lack substance\", \"synthetic hard problem generation (3.3) is not evaluated against LightGBM\", \"no extensive literature review/citations\"], \"rating\": \"2\", \"confidence\": \"4\"}",
"{\"title\": \"Review of \\\"Toward Trustworthy Difficulty Assessments: Large Language Models as Judges in Programming and Synthetic Tasks\\\"\", \"review\": \"## **Summary**\\n\\nThis paper compares the performance of GPT-4o and LightGBM ensemble model in predicting the difficulty level of coding questions on LeetCode. The dataset consists of 1,825 problems categorized as Easy, Medium, or Hard. The study finds that GPT-4o achieves around 38% accuracy, which is significantly lower than LightGBM\\u2019s 86% accuracy. The paper suggests that the primary reason for GPT-4o's underperformance is its tendency to overlook numerical details, such as time constraints and memory constraints, which are essential in determining the difficulty of programming tasks. The study highlights that GPT-4o relies heavily on textual descriptions, leading to misclassifications, whereas LightGBM uses numeric features effectively to distinguish between different difficulty levels.\\n\\n## **Strengths & Weaknesses**\\n\\n**Strengths**\\nThe problem this paper aims to address is important, as it highlights that GPT-4o overlooks specific kinds of details and exhibits bias toward certain categories. However, the experiments conducted are not sufficient to conclusively support this claim.\\n\\n**Weaknesses**\\nDespite addressing a relevant problem, the methodology for evaluating GPT-4o is not sufficiently explained, making it difficult to fully understand the experimental setup. The study does not explore few-shot learning or advanced prompting techniques, which could potentially enhance GPT-4o's performance. Additionally, it is not fair to assess GPT-4o without clearly defining the factors used for labeling, as these are not mentioned in the paper. The related work section is also limited, lacking a comprehensive review of existing studies on difficulty assessment using LLMs. The conclusions drawn are not fully supported due to the limited scope of experiments.\", \"rating\": \"2\", \"confidence\": \"5\"}",
"{\"title\": \"incomplete position work on whether LLM can be a fair Judge on programming difficulties.\", \"review\": \"The paper tackles an interesting and practical issue: whether LLMs, particularly GPT-4o, can accurately assess difficulty in programming problems. The comparison between GPT-4o and a structured machine learning model (LightGBM) provides a useful benchmark.\\n\\nHowever, this paper does not explore strong mitigation strategies (e.g., prompt engineering, fine-tuning) to improve GPT-4o\\u2019s accuracy, particularly prompt plays a key role in LLMs' performance. Instead of simply concluding that LLMs underperform, the paper should investigate ways to make them more competitive. \\n\\nIn addition, while the empirical results are strong, there is little theoretical explanation for *why* GPT-4o struggles so much. The discussion is largely descriptive rather than analytical; it states that GPT-4o is \\\"biased toward simpler categories\\\" but does not explain why in terms of model behavior or training biases.\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
XWwta75eDs | Mind the Gap: A Practical Attack on GGUF Quantization | [
"Kazuki Egashira",
"Robin Staab",
"Mark Vero",
"Jingxuan He",
"Martin Vechev"
] | With the increasing size of frontier LLMs, post-training quantization has become the standard for memory-efficient deployment. Recent work has shown that basic rounding-based quantization schemes pose security risks, as they can be exploited to inject malicious behaviors into quantized models that remain hidden in full precision. However, existing attacks cannot be applied to more complex quantization methods, such as the GGUF family used in the popular ollama and llama.cpp frameworks. In this work, we address this gap by introducing the first attack on GGUF. Our key insight is that the quantization error -- the difference between the full-precision weights and their (de-)quantized version -- provides sufficient flexibility to construct malicious quantized models that appear benign in full precision. Leveraging this, we develop an attack that trains the target malicious LLM while constraining its weights based on quantization errors. We demonstrate the effectiveness of our attack on three popular LLMs across nine GGUF quantization data types on three diverse attack scenarios: insecure code generation ($\Delta$=$88.7\%$), targeted content injection ($\Delta$=$85.0\%$), and benign instruction refusal ($\Delta$=$30.1\%$). Our attack highlights that (1) the most widely used post-training quantization method is susceptible to adversarial interferences, and (2) the complexity of quantization schemes alone is insufficient as a defense. | [
"quantization",
"large language models",
"security",
"poisoning",
"gguf"
] | Accept | https://openreview.net/pdf?id=XWwta75eDs | https://openreview.net/forum?id=XWwta75eDs | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"uvun2XOSPJ",
"dgZ96pzhWS",
"W6NJsvmwhx",
"I9R6CvBKz6"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740868555719,
1739846724466,
1740497243249,
1741075467447
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission12/Reviewer_VnWF"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission12/Reviewer_jSqV"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission12/Reviewer_uxFD"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Analysis of Novel GGUF Quantization Attacks: Effective Across Models but Variable in Performance by Attack Type\", \"review\": \"## Summary\\nThis paper introduces the first attack on GGUF quantization, which is used to deploy LLMs on consumer hardware. The authors show that their modifications create malicious models that appear benign in full precision but exhibit harmful behaviors when quantized using GGUF. This could be a problem for popular frameworks like llama.cpp and ollama, which use quantised models. By exploiting the quantization error between full-precision weights and the quantized versions, they develop an \\\"error-based interval\\\" attack that works across multiple quantization types simultaneously. Their method is effective across three popular LLMs, multiple GGUF quantization data types, and three attack scenarios \\u2013 insecure code generation, content injection, and instruction refusal. The authors show, in essence, that optimisation-based quantization methods are not immune to adversarial attacks.\\n\\n## Claimed Novel Contributions\\n- A error-based interval estimation method that enables attacks on optimization-based GGUF k-quant quantization data types\\n- Evidence showing the attack consistently produces stealthy and effective quantization exploits across models, k-quant types, and attack scenarios\\n- Analysis exploring key attack design choices, heuristics, interval sizes, and existing defenses\\n\\n## Strengths\\nClearly a novel technical contribution is being made with this new method to attack, whereas the previous research targeted simpler zero-shot quantisation methods (as outlined in the paper). This has clear implications for practical LM deployment scenarios, which is timely and should be looked at. GGUF is very widely used, and by targeting this the authors share a vulnerability that affects many real-world deployments.\\n\\nThe experiments seem very thorough, and are validated across 3 separate models, which is a very time-consuming exercise within itself - the authors also evaluated across many quantisation data types and separate scenarios, with clear results showing that the attack is effective in vulnerable code generation, content injection and benign instruction refusal. The use of ablation studies to show the relationship between interval size, and attack success rates is particularly interesting.\\n\\n## Weak Points\\nThere could be a better explanation of why error-based intervals work effectively - there are strong results, but there is a mention of why error-based intervals may not preserve quantization which deserves more time. It\\u2019s also not made clear if those sorts of edge cases could appear in practice. \\n\\nIt could also be worth elaborating on existing attack methods further - they state that previous approaches are applicable to optimisation-based quantization, but could provide some more insight based on their research. \\n\\nI also noticed that attack success varies considerably across different settings. For instance, the method performs better at content injection (\\u2206=85.0%) than instruction refusal (\\u2206=30.1%), but the paper doesn't provide a thorough explanation for these performance differences, which would be valuable for understanding the attack's limitations. 
\\n\\nI would have benefitted from a more broken down explanation of GGUF k-quant algorithms than that given in Section 3.1 and 3.2, which is very deep to begin with and was challenging as a reader without prior knowledge of the intricacies of the quantization methods. For a paper with clear practical implications this is especially important.\\n\\n\\n## Questions\\n1. How robust is the attack against variations in the GGUF implementation? Would minor changes in the algorithm break this approach?\\n2. For the over-refusal scenario where success rates were lower (\\u2206=30.1%), have you explored whether multiple rounds of your attack (iteratively refining the intervals) could improve success rates? I'm wondering if there are ways to boost performance in this more challenging setting.\\n3. Looking beyond GGUF, how might your approach be extended to other optimization-based quantization methods? Do your conclusions regarding error-based intervals generalize to other popular quantization frameworks?\", \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"title\": \"Review\", \"review\": \"This paper innovatively presents the first attack on GGUF quantization, and conducts experiments across three LLMs, nine GGUF quantization data types, and three attack scenarios. The introduction of the error - based interval estimation method is innovative and enables attacks on complex quantization types.\\n\\n**Questions:**\\n\\n1. The paper focuses only on GGUF quantization. Will the findings be extended to other quantization families?\\n2. Model quantization is used in large models like 70B - parameter ones. How will the proposed method perform on even larger models?\\n3. Does the paper consider the impact of common protective measures on the feasibility of the attack?\\n4. Is this paper's attack method applicable to multimodal model quantization?\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"Solid work with strong practical applications.\", \"review\": \"This work introduces the first attack on the GGUF quantization method. The authors leverage quantization error to construct a malicious LLM. As GGUF is often used to quantize models in practice, this work has high practical significance.\", \"strength\": [\"This work proposes attacks on optimization-based quantization methods (arguably, most used in practice), extending previous works on zero-shot quantization.\", \"Good technical contribution: the proposed method finds intervals in which the model weights could be changed, so that the weights do not change after applying k-quant method, extending previous work on zero-shot quantization (which could not be applied to k-quant)\", \"Experiments are solid, as the authors cover many quantization types, attacks, etc.\", \"The paper is really well structured and was a pleasure to read. The authors go as far as to give a formal definition of the k-quant algorithm (which was implemented in practice but wasn\\u2019t covered in literature), which made the reading a smooth experience. I am not familiar with attacks on quantization models, and this paper served a good introduction to the topic.\"], \"weakness\": [\"The method is approximate. As the authors discuss, the method preserves quantization only for \\u201cthe most\\u201d of the weights. While they show in practice the number of unpreserved weights is low, it would be good to also have some theoretical guarantees here as well.\", \"I am not an expert in quantization, it's possible I missed some faults (even if I did, this work is for sure well above acceptance threshold for a workshop).\"], \"rating\": \"9\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"comment\": \"The key merit of this paper is to present the first attack on GGUF quantization and its thorough experimentation.\", \"title\": \"Paper Decision\"}"
]
} |
XPZFcs19ud | Unveiling Control Vectors in Language Models with Sparse Autoencoders | [] | Sparse autoencoders have recently emerged as a promising tool for explaining the internal mechanisms of large language models by disentangling complex activations into interpretable features. However, understanding the role and behavior of individual SAE features remains challenging. Prior approaches primarily focus on interpreting SAE features based on their activations or input correlations, which provide limited insight into their influence on model outputs. In this work, we investigate a specific subset of SAE features that directly control the generation behavior of LLMs. We term these “generation features”, as they reliably trigger the generation of specific tokens or semantically related token groups when activated, regardless of input context. Using a systematic methodology based on causal intervention, we identify and validate these features with significantly higher precision than baseline methods. Through extensive experiments on the Gemma models, we demonstrate that generation features reveal interesting phenomena about both the LLM and SAE architectures. These findings deepen our understanding of the generative mechanisms within LLMs and highlight the potential of SAEs for controlled text generation and model interpretability. Our code is available at https://anonymous.4open.science/r/control-vector-with-sae-AAFB. | [
"sparse autoencoder",
"controllability",
"model explanation",
"mechanistic interpretability"
] | Reject | https://openreview.net/pdf?id=XPZFcs19ud | https://openreview.net/forum?id=XPZFcs19ud | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"pKO3D3kEoZ",
"ov1dfW2xh1",
"OVuqhuI3AG",
"F76fmIkMUr"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1739689060579,
1740895296832,
1739987072098,
1741078879742
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission95/Reviewer_B3KQ"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission95/Reviewer_uyKU"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission95/Reviewer_zugh"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Official Review of Submission95\", \"review\": [\"Strengths\", \"**Clear Methodology** - Paper presents a systematic approach based on causal intervention for identifying generation features. The methodology is explained thoroughly and builds logically from established theoretical foundations (eg do operator).\", \"**Well-Structured Organization** - The paper maintains structural coherence throughout and was easy to follow.\", \"**Somewhat Comprehensive Empirical Analysis** - The research provides empirical investigation of Gemma models, specifically examining how sparsity and width influence generation features. Their layer-wise distribution analysis offers some valuable insights into model behavior and feature interactions. However, I would have liked to see models from outside the Gemma family been evaluated as well, given the availability of open-weights SAEs (eg Llama-Scope)\", \"Weaknesses\", \"**Limited Innovation and Practical Impact** - The fundamental concept of identifying SAE features that predictably increase logits for specific token sets lacks novelty within the field. Similar approaches have been extensively explored in interpretability research, and the paper doesn't sufficiently differentiate its contributions. The single-token focus appears arbitrary and lacks clear practical value. The paper's central concept of \\\"generation features\\\" appears somewhat self-evident, as the existence of correlations between sparse features and token generation could be reasonably expected. The research fails to demonstrate why this finding represents a significant advancement in our understanding of language models.\", \"**Overly Restricted Scope** - The research's emphasis on single-token or small token set control significantly limits its applicability. While the authors acknowledge broader work on style and sentiment control, they don't adequately justify their narrow focus. This limitation substantially reduces the practical utility of their findings for real-world applications. Despite mentioning possible applications in controlled text generation and model interpretability, the paper fails to demonstrate compelling real-world use cases.\", \"**Inadequate Theoretical Foundation** - The \\\"Zero Baseline Assumption,\\\" while convenient for analysis, lacks robust justification. The paper would benefit from a more thorough theoretical explanation and empirical validation of this assumption, particularly regarding its applicability across different contexts.\", \"Questions\", \"See Weaknesses\"], \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"title\": \"Identifying SAE latents with a large effect on language model outputs\", \"review\": [\"This paper demonstrates an approach for understanding which SAE latents have a large effect on a language model\\u2019s outputs. The paper is hard to follow and lacks detailed explanations for important assumptions. It doesn\\u2019t provide a clear motivation for why the its perspective of \\u201cgeneration latents\\u201d is a more useful model then measuring the attribution of a latent on outputs.\", \"# Strengths\", \"The paper touches on an interesting direction. Identifying SAE latents with a large effect on the output could produce useful, practical applications of interpretability.\", \"# Weaknesses\", \"The formalization of causal interventions with do-calculus is unnecessary, lengthy, and detracts from the narrative of the paper.\", \"Generation is measured by how often scaling the latent leads to a specific token appearing in the model\\u2019s outputs. There is no analysis done on whether steering the latent scales the log-probs of the token which might provide a cheaper, more granular analysis of generation latents.\", \"\\u201cGeneration latents\\u201d is an inaccurate perspective. Scaling a latent has some downstream effect, mediated by intermediate components in the transformer. Some latents have a more direct effect on the outputs than others.\", \"There\\u2019s no motivation for measuring the correlation between l0 and the number of generations latents per layer.\", \"Categories are chosen arbitrarily without an explanation for why they are important types of generation latents to understand.\", \"Ten prompts for latent identification is not enough to make a significant claim on the number of SAE latents.\", \"The amount that a latent needs to be scaled varies; there\\u2019s no information on the scaling coefficients for specific latents. Scaling equally for all latents might not elicit information.\", \"There\\u2019s no explanation on what the error term in equation 6 is for.\"], \"rating\": \"2\", \"confidence\": \"5\"}",
"{\"title\": \"Interesting application of SAEs to controllable generation; weak accept\", \"review\": \"SAEs are used to find the building block features of model activations. There are two problems with this: 1) because this is an unsupervised method, we have to manually identify the semantic meaning of SAE latents which is difficult 2) there is no guarantee that latents are causally used by the model. This paper identifies a subset of latents that have causal power in making the model output specific generations when steered with. Steering here is setting activations to certain values.\\n\\nOverall, I find the use of SAEs for controllable generation to be an interesting application. I weakly accept because I believe better comparative baselines could have been chosen.\\n\\nStrengths \\n- The paper identifies an interesting two-way question. 1) Can controllable generation be used to interpret SAE latents and 2) can SAE latents be used to help controllable generation. Both are interesting directions. The latter especially is useful, because mechanistic interpretability as a field is struggling to find real-world use cases for SAEs. \\n- The paper is well written and has many relevant experimental results. There are good robustness checks on SAE width and sparsity. I also liked the investigation of layer depth.\\n\\nWeaknesses\\n- It's not clear to me how useful controllable generation is, especially in the single-word regime. Multi-token controllable generation seems more useful, but I fail to see the utility of having a model that predictably always outputs a single phrase. It's just an if-statement at that point.\\n- I think this paper could be made stronger by comparing against other non-SAE baseline method. I.e., the exploration of the logit lens baseline still uses SAE features as a basis to consider. What about a supervised baseline where we take a hundred prompts where the next token is \\\"black,\\\" take the average activation on the last token, and use that to steer with rather than the SAE latent for \\\"black\\\" (Table 4 first row). There's an argument to be made that unsupervised methods are more valuable than supervised probes, but I think additional discussion that considers stronger baselines could improve the paper.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"comment\": \"The paper lacks a clear motivation for its \\\"generation latents\\\" perspective and does not convincingly demonstrate its novelty beyond existing interpretability research. Additionally, its scope is overly restricted to single-token control, limiting real-world applicability, while key methodological choices\\u2014such as causal intervention formalization and latent scaling\\u2014lack sufficient justification or comparative baselines.\", \"title\": \"Paper Decision\"}"
]
} |
XMXWk83dah | Copilot Evaluation Harness: Building User Trust in LLMs and LM Agents for IDE Environments | [] | The addition of Large Language Models (LLMs) into Integrated Code Development Environments (IDEs) has become a focal point in modern software development. LLMs offer the potential to significantly augment developer productivity by serving as intelligent, chat-driven programming assistants, especially with the increase in LLM-driven coding agents. With these tools comes the need for safeguards and metrics for quality assurance for consumers. In this paper, we introduce the Copilot Evaluation Harness: a set of data and tools for evaluating LLM-guided coding, covering various programming scenarios and languages. We propose a more robust system for measuring and understanding model behavior when leveraged as chat coding assistants or coding agents than previous state of the art evaluation metrics.
We design and compute both static and execution-based success metrics on a wide range of developer tasks, including documentation generation from code (doc), test case generation (test), and bug-fixing (fix). In the chat scenario, we see that GPT4o has much lower prompt sensitivity than the other models. In the agentic scenario, we find that reasoning models are more inclined to generate one-shot solutions, even when given multiple turns and access to tool calling. We show how results from our metrics can be used to increase the interpretability and explainability of LLMs in the real-world IDE-chat scenario. | [
"Machine Learning",
"LLMs for Code Generation",
"Reliability",
"Execution-Based Benchmark"
] | Reject | https://openreview.net/pdf?id=XMXWk83dah | https://openreview.net/forum?id=XMXWk83dah | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"rtwFy2Eqe6",
"hOXutXK31e",
"EL1L540XRn",
"AXPuV3mrnL"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740857153900,
1741144592120,
1740879756534,
1740822163324
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission108/Reviewer_bjs7"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission108/Reviewer_sH7b"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission108/Reviewer_EVPA"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"review\": \"This work proposes a new benchmark / harness along the lines of SWE-Bench, but testing a more general capability: interactive IDE scenarios.\\n\\nI find it significant because AFAICT there are no other benchmarks that test this realistic user scenario. SWE-Bench has a much more restricted setting. General purpose coding assistants will need to be integrated with a user's development environment, rather than being restricted to make PRs based on well-defined bug requests / issues. \\n\\nEach of the scenarios evaluated makes sense, and I think the bug-fixing scenario is particularly interesting given the access to static analyzer errors. I find that existing models are poor at dealing with static analyzer errors. Finally, the analysis of existing models and their sensitivity to instructions is an interesting and useful empirical finding.\", \"rating\": \"10\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"comment\": \"As noted by the three reviewers, the topic is interesting. However, the execution is incomplete, with unclear dataset composition, missing technical details, and contradictions in problem formulation. Additionally, the evaluation is still in progress, raising concerns about the readiness of the work for publication. While one reviewer rates the paper highly, the majority highlight significant gaps. Given the paper's lack of completeness, I recommend rejection, with the suggestion to refine and resubmit when the evaluation is more robust.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Great step forward for LLM+Code evals but lacking rigor\", \"review\": [\"This paper proposes a new evaluation harness with new metrics that increases breadth as compared to previous works. It proposes 4 different tasks in two different settings (single turn vs multi-turn): 1) document generation, 2) bug fixing, 3) code generation, and 4) test case generation. As it correctly points out, existing popular benchmarks like HumanEval and SWE-Bench are more limited in what they are measuring. However, while the paper does increase breadth, it lacks some depth and fails to acknowledge it at several key points. Furthermore, some important design decisions and technical details are missing. Nonetheless, the initial experiments show some interesting insights which will undoubtedly play a role in 2025 if they do indeed hold true. That is, reasoning models are a poor fit in agentic settings.\", \"The paper mentions that 5 and 6 languages are used. It seems only 5 were evaluated so that should be updated.\", \"Repositories are filtered by size & in \\\"generate\\\" settings, \\\"correctness is measured by running the repository's test suite\\\". There is a critical oversight in that the ratio of tests to lines of code or separately, the code coverage of repository is not considered.\", \"A build agent is used throughout the evaluations and details of this agent are left out. It's unclear if any evaluation errors could have been from this agent.\", \"In the \\\"doc\\\" setting, there are essentially syntactic checks but no semantic checks. This oversight is not even acknowledged.\", \"In \\\"fix\\\", the authors mention a tradeoff between selecting from two different metrics but fail to consider that both could be possible at the same time.\", \"It would be helpful if the best model per row in the tables were bold\", \"The chat problem formulation is contradicting. In the problem formulation, it says only a one line query is specified but in the next paragraph, we see that the bug scenario uses few-shot prompting. This raises questions about the overall insights.\", \"In the agentic scenario, the terminal commands are not specified.\", \"In related works, \\\"Unlike traditional machine learning models where k-fold cross validation was a common evaluation process, LLMs are often evaluated using static data sets\\\". I dont follow this statement. Computer vision uses other metrics and even within NLP, perplexity is still used. There are also some new ones like false refusal rates or verbatim usage across n-grams.\", \"SE is not defined\", \"Given the shacky evaluations and oversights, this paper has potential but is not read as is.\"], \"rating\": \"5\", \"confidence\": \"5\"}",
"{\"title\": \"Review to Copilot Evaluation Harness\", \"review\": [\"This paper presents an evaluation harness for LLMs on tasks relevant to interactive software development aided by LLMs: generation of documentation, bug fixing, function generation and test case generation. The authors compile a dataset based on 300 repositories across several programming languages and demonstrate that models have strongly varying performance across these tasks.\", \"The topic of the paper is interesting. The gap between HumanEval and similar code generation benchmarks and SWE-bench and similar repository based generation tasks is huge, with the proposed benchmark providing a promising middle-ground.\", \"While the overall setting is promising, I think the work presented is not ready for in-depth discussion and presentation at a workshop. The paper includes several major flaws, including lack of detail about the dataset and used methods and misleading wording. Presentation of the paper also requires much additional work.\", \"First, the paper seems incomplete and lacks crucial details about the evaluation. While the workshop calls for work in progress and negative results, the paper should at least present a detailed and complete explanation of intermediate results.\", \"The evaluation seems not finished. The authors mention that their setting `generation` is still running and will be added once complete (l.155). I appreciate the honesty but recommend to omit introducing any settings where no results can be presented (as these can not be peer-reviewed!)\", \"The current descriptions leaves me with many open questions.\", \"The authors mention they evaluate \\\"within [...] Visual Studio Code\\\" (l.73). It is not clear to me how this is implemented nor what benefit this brings (as opposed to evaluating in any sufficient developer-setup like setting).\", \"I have trouble understanding how exactly the evaluation integrates \\\"user interaction\\\", Figure 1 is not really helpful in explaining how it works.\", \"There are no details on the exact composition of the final dataset (introduced in sec 2.1). How many repositories remain after filtering? How many functions are picked for doc/test/generate/fix, how long are they on average, are they highly integrated into the environment (many external calls) or isolated (like HumanEval)? Which kind of static errors (statistics about this) are in the fix setting (how many are resolved by LLMs/which ones)?\", \"How is the \\\"coverage\\\" of generated documentation measured? This does not seem like a standard term to me (for documentation)\", \"Do you only consider strict decrease of errors in the fix setting or *also* make sure that the original error disappears? I can imagine that the LLM fixes some formatting related errors but does not actually resolve relevant errors?\", \"Do you maintain only the function signature for the generate setting or do you also check that documentation is present?\", \"How are model patches applied? Do models have to generate patches in a one-shot setting?\", \"The described coding agent in line 115 seems very interesting and I recommend the authors to provide more detail on how and how well it works.\", \"Moreover I think there is actually no interaction in any of the setups (chat or agentic) except for the initial prompt, so how is the harness different to HumanEval/SWE-Bench in this respect (compare l. 69)? If there is just a single prompt, please refer to it this way and don't mention \\\"user interaction\\\" (e.g. Figure 1). 
The benchmark seems to evaluate LLM performance on Copilot-setting user prompts but not on user interaction, which implies several rounds of editing with user feedback, as done by e.g. Cursor or Aider. LLM-guided coding (as mentioned in the abstract) also seems to be imprecise, as the LLM is still guided by a user prompt and not a user guided by the LLM.\"], \"nitpicks\": [\"For the setting of test generation, SWT-bench and LIBRO seem like related work [1,2]\", \"The test setting appears to be very difficult for code models and agents. This appears surprising as test generation in SWT-Bench [1] appeared to be similarly difficult as patching in SWE-bench.\", \"Please provide averages for languages and LLMs in Table 1/2. Other works [3,4] usually show results with rows per LLM, and I recommend sticking to this formatting to avoid confusion among readers.\", \"Please correctly use citep in line 36 and similar settings\", \"You mention that the eval harness \\\"enables a new level of understanding of model behavior [beyond numeric values of other benchmarks]\\\" (l.68). Why? The main results of your benchmark are numeric values (Table 1/2).\", \"Figure 2-5 provide too much detail for the main text. I recommend providing only a summary of the results and moving the figures to the appendix.\", \"l153 you refer to 4 LLMs as \\\"myriad\\\". This term is already very informal and I don't think it fits either (I would definitely expect > 4 evaluated LLMs for this word)\", \"[1] M\\u00fcndler et al. *SWT-Bench: Testing and Validating Real-World Bug-Fixes with Code Agents*\", \"[2] Kang et al. *Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction*\", \"[3] Chen et al *Evaluating Large Language Models Trained on Code*\", \"[4] Jiminez et al. *SWE-bench: Can Language Models Resolve Real-World GitHub Issues?*\"], \"rating\": \"4\", \"confidence\": \"5\"}"
]
} |
WtFcq17viH | UTF: Undertrained Tokens as Fingerprints —— A Novel Approach to LLM Identification | [] | Fingerprinting large language models (LLMs) is essential for verifying model ownership, ensuring authenticity, and preventing misuse. Traditional fingerprinting methods often require significant computational overhead or white-box verification access. In this paper, we introduce UTF, a novel and efficient approach to fingerprinting LLMs by leveraging under-trained tokens. Under-trained tokens are tokens that the model has not fully learned during its training phase. By utilizing these tokens, we perform supervised fine-tuning to embed specific input-output pairs into the model. This process allows the LLM to produce predetermined outputs when presented with certain inputs, effectively embedding a unique fingerprint.
Our method has minimal overhead and impact on the model's performance, and does not require white-box access to the target model for ownership identification. Compared to existing fingerprinting methods, UTF is also more effective and robust to fine-tuning and random guessing. | [
"Large Language Model",
"fingerprint"
] | Reject | https://openreview.net/pdf?id=WtFcq17viH | https://openreview.net/forum?id=WtFcq17viH | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"fEexDHDYTt",
"A7VbzvZaa0",
"A5gORJIza5",
"61FVGBIp7w"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1739700790221,
1740439253937,
1741079837531,
1740871144548
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission137/Reviewer_YvbM"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission137/Reviewer_invm"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission137/Reviewer_F3jw"
]
],
"structured_content_str": [
"{\"title\": \"Fingerprinting paper, a little incomplete\", \"review\": [\"# Summary\", \"This paper discusses the field of fingerprinting, which is used to verify model ownership.\", \"The proposed method, UTF, involves training to associate pairs of tokens (x, y) that are not sufficiently trained during pre-training. During inference, when x is the input, it becomes easier to output y.\", \"# Strengths\", \"Compared to existing research by Xu et al., this paper achieves improvements in many aspects, including black-box access, minimal impact on performance, reduced training time, robustness to subsequent additional training, and reduction of false positives.\", \"The evaluation is comprehensive, covering five aspects: Effectiveness, Reliability, Efficiency, Harmlessness, and Persistence.\", \"# Weaknesses and Unclear Points\", \"Structure: With only a little over five pages out of the allowed nine, the paper feels incomplete. Additionally, there are several structural issues:\", \"On Page 3, Figure 2 occupies too much space.\", \"It is difficult to find the text for Section 2.1 around line 135 on Page 3.\", \"The numbers (0-5) in the labels of Figure 3 are meaningless.\", \"Around line 199 on Page 4, the explanation of Baseline Methods is insufficient, and the settings for each are unclear. It is not clear what \\\"IF\\\" stands for.\", \"Organization of Related Literature: Throughout the text, it sounds like mentioning multiple prior studies, but only a comparison with Xu et al. is made. A comprehensive review of the fingerprinting field is necessary. Additionally, it would be beneficial to clearly distinguish this field from adjacent fields such as watermarking, and discuss why fingerprinting is important.\", \"Overall, while the paper addresses significant issues regarding model ownership, it fails to effectively convey the importance and merits of its methods, and thus, I determined it does not meet the acceptance threshold. By fully utilizing the allowed page count to enhance explanations of related research and methodologies, this could become a good paper.\"], \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"title\": \"Official Review\", \"review\": [\"## Summary\", \"The paper proposes using under-trained tokens to construct fingerprint strings for LLMs. This leverages ideas from prior work which finds undertrained tokens by inspecting the unembedding matrix. The under-trained tokens are concatenated to create a fingerprint string, which the LLM is then trained with. The paper demonstrates that such fingerprints are faster to embed, more harmless and persistent after fine-tuning.\", \"## Strengths\", \"The method is very intuitive, and a straight-forward extension of works in both fingerprinting and LLM identification (i.e. under-trained tokens)\", \"The empirical results are promising.\", \"The paper is well-written overall\", \"## Weaknesses\", \"I find it a bit hard to believe that the effectiveness of fingerprinting methods is less than 100%. In theory one can train a model long enough to make it memorize any piece of text. Why is it that certain fingerprints are not reliably memorized then?\", \"The baselines are not clearly described in the paper. Specifically, I could not understand the difference between IF and UTF_{IF}.\", \"An analysis of $\\\\tau$, which is the hyper-parameter constrolling what fraction of under-trained tokens are used will be insightful.\", \"An important security risk not considered by the paper is input-filtering, where the input might be blocked because it contains gibberish text. This is a practical attack which is outside the scope of the paper.\"], \"rating\": \"7\", \"confidence\": \"5\"}",
"{\"decision\": \"Reject\", \"comment\": \"The paper lacks comprehensive experimental validation, as most results focus on Llama2, raising concerns about generalizability across different model architectures. Additionally, key implementation details, such as the frequency and impact of fingerprint repetition, are ambiguous, and the paper does not sufficiently clarify baseline methods, structural organization, or security risks like input filtering.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"The proposed fingerprinting method shows promise with its straightforward approach, but lacks of important details,\", \"review\": \"The approach leverages tokens that appear infrequently in training data. The proposed method creates a fingerprinting mechanism that exploits the model\\u2019s limited exposure to these tokens during pre-training. The authors aim to address fundamental limitations in existing fingerprinting techniques: non-realistic scenario of white-box access during validation, utility trade-off, or effectively embedding the fingerprint.\\n\\nThe paper clearly presents its methodology, making the technical approach accessible and potentially reproducible. The straightforward nature of the technique enhances its appeal for practical deployment scenarios.\", \"weaknesses\": \"- While four models were tested, Table 3 only reports comprehensive results for Llama2, raising questions about the method\\u2019s generalizability across model architectures.\\nThe experimental protocol lacks details. For instance, the implementation details are ambiguous, particularly concerning the frequency and pattern of how many times the fingerprint is repeated. Also, how does this hyperparameter impact the method?\\nExploring other adaptation techniques, such as LoRA, would enhance the work\\u2019s practical utility in real-world deployment scenarios.\", \"rating\": \"5\", \"confidence\": \"3\"}"
]
} |
VkDVXt5Cr7 | Detecting Unreliable Responses in Generative Vision-Language Models via Visual Uncertainty | [] | Building trust in vision-language models (VLMs) requires reliable uncertainty estimation (UE) to detect unreliable generations. Existing UE approaches often require access to internal model representations to train an uncertainty estimator, which may not always be feasible. Black-box methods primarily rely on language-based augmentations, such as question rephrasings or sub-question modules, to detect unreliable generations. However, the role of visual information in UE remains largely underexplored. To study this aspect of the UE research problem, we investigate a visual contrast approach that perturbs input images by removing visual evidence relevant to the question and measures changes in the output distribution. We hypothesize that for unreliable generations, the output token distributions from an augmented and unaugmented image remain similar despite the removal of key visual information in the augmented image. We evaluate this method on the A-OKVQA dataset using four popular pre-trained VLMs. Our results demonstrate that visual contrast, even when applied only at the first token, can be as effective as—if not always superior to—existing state-of-the-art probability-based black-box methods. | [
"Visual Uncertainty"
] | Reject | https://openreview.net/pdf?id=VkDVXt5Cr7 | https://openreview.net/forum?id=VkDVXt5Cr7 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"esgLFOG9Ph",
"VWESATXb9L",
"IPDYCcakEj",
"0iijk34xbV"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740007906831,
1740856678628,
1740458140558,
1740911491697
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission90/Reviewer_wgb5"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission90/Reviewer_x1dF"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission90/Reviewer_f3T7"
]
],
"structured_content_str": [
"{\"title\": \"This paper tackles an interesting problem (uncertainty estimation for VLMs) and suggests that this can be better estimated through applying perturbations to the underlying image. However, I have concerns about how and where these perturbations can be applied.\", \"review\": [\"### Strengths:\", \"This paper centers on improving uncertainty estimation in VLMs. I can imagine many useful applications of this topic and would like to see more research in this area.\", \"The introduction does a nice job of summarizing the literature. This was a good summary for someone who does not research in this area.\", \"The results indicate this method achieves similar performance to more computationally intensive methods.\", \"### Weaknesses:\", \"My main concern with this work is in the selection of visual evidence relevant to the question. If we are using the model itself to select the relevant visual evidence (as is done under the attention-based mask), there is a significant difference between a confidently/unconfidently incorrect model. A confident model may be focusing on the wrong parts of the image and would therefore mask these incorrect parts (in fact, this might help to make it more correct in its outputs). Perhaps this is why the attention-based mask does not perform as well as the black image or diffusion noise. If replacing the image with an all-black image is the plan for the method, this is a much different method that one that *perturbs input images by removing visual evidence relevant to the question*.\", \"Your results indicate similar performance to the UE method Length-Normalized confidence. What is the advantage of your method?\", \"In a full length paper, I believe Results sections 2 and 3 would best belong in an appendix. Instead I would have liked to see more discussion about the advantages of your method versus more computationally intensive methods.\", \"It is not clear to me how GPT-3.5 was used as an evaluator. I could have used more description about this in the evaluation metric section. How accuracy is GPT-3.5 as an evaluator?\", \"### Typos/Suggestions:\", \"Lines 31, 53: commonsense should be common sense\", \"Line 86: *improved version* is quite vague, in what sense is it an improvement?\", \"Formatting on split between page 2 and page 3 is very difficult to read\", \"Table 2 is likely better represented within text rather than as a table\", \"I would have liked the results section to have more discussion on Table 1, with less emphasis on Table 2 and 3\"], \"rating\": \"4\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Novel and Cheap UE Method that Could Use Better Use-Case Framing\", \"review\": \"The authors propose a new UE method for unreliable generations that is model agnostic and relatively cheap to compute. I think the general concept of UE and the proposed method are relatively clear, although the related work/literature review is somewhat unclear and hard to get through. The combination of proposed method and lit review makes it hard to follow as well. The proposed method appears to perform decently well/somewhat competitive against other (said to be) expensive baselines, although it does not \\\"win\\\" in any benchmark dataset, which is unfortunate. I think this paper would be stronger if computational runtimes / walk clock times were also compared as that appears to be the edge of this method, for now. Some more analysis on why the top1 token is the most effective would be interesting, and give more intuition on the effectiveness and potential of this method. In summary:\", \"pros\": [\"decently performing model that its cheap to commute (and appears to be novel/original)\", \"straight forward to take away problem context and proposed method's strength from the workshop paper\", \"UE seems to be pretty significant, especially with rise of LLMs/VLMs, but could use better setting up (see con 1)\"], \"cons\": [\"proposed model doesn't perform well enough to justify use w/o time comparisons (and better framing that perhaps in some use cases UE has to occur very fast, so computational performance tradeoff is justified)\", \"more clarity in related works section and intuition for some proposed method choices\"], \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"The paper focuses on uncertainty estimation of vision language models (VLM) and address it using visual contrast approach. The paper also focuses on effect of distance metrics on AUROC and uncertainty estimators on VLM.\", \"review\": \"Strengths:\\n1. The paper is easy to read and follow\\n2. The paper proposes method of uncertainty estimation in VLMs using visual contrast.\", \"weakness\": \"1. There is no convincing evidence for the hypothesis that visual contrast is the best uncertainty estimator. In results it is not proving the same hypothesis not for any model, not on average over models. The authors have not experimented on various other datasets for visual question answering.\\n2. While using the visual contrast approach, the complexity of question is low or easy, the authors can experiment on combination of visual contrast approach and language based uncertainty methods.\\n3. Intuitive explanation of why black image is better than diffusion noise method for data augmentation.\\n4.The technical quality of the paper is low\", \"rating\": \"6\", \"confidence\": \"3\"}"
]
} |
VbG9sIsn4F | Model Evaluations Need Rigorous and Transparent Human Baselines | [
"Kevin Wei",
"Patricia Paskov",
"Sunishchal Dev",
"Michael J Byun",
"Anka Reuel",
"Xavier Roberts-Gaal",
"Rachel Calcott",
"Evie Coxon",
"Chinmay Deshpande"
] | **This position paper argues that human baselines in foundation model evaluations must be more rigorous and more transparent to enable meaningful comparisons of human vs. AI performance.** Human performance baselines are vital for the machine learning community, downstream users, and policymakers to interpret AI evaluations. Models are often claimed to achieve "super-human" performance, but existing baselining methods are neither sufficiently rigorous nor sufficiently well-documented to robustly measure and assess performance differences. Based on a meta-review of the measurement theory and AI evaluation literatures, we derive a framework for assessing human baselining methods. We then use our framework to systematically review 113 human baselines (studies) in foundation model evaluations, identifying shortcomings in existing baselining methods. We publish our framework as a reporting checklist for researchers conducting human baseline studies. We hope our work can advance more rigorous AI evaluation practices that can better serve both the research community and policymakers. | [
"human baseline",
"human performance",
"human performance baseline",
"science of evaluations",
"AI evaluation",
"model evaluation",
"LLM evaluation",
"evaluation methodology",
"language model",
"foundation model"
] | Accept | https://openreview.net/pdf?id=VbG9sIsn4F | https://openreview.net/forum?id=VbG9sIsn4F | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"iUXS48IHUz",
"c9yQmpS72e",
"MrFPmnsGU6",
"40327cwTAT"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741078564034,
1740595570897,
1740897924367,
1740721751380
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission46/Reviewer_9kLM"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission46/Reviewer_XY4u"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission46/Reviewer_nBXj"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"comment\": \"This well-crafted position paper highlights the inconsistencies in human performance baselines for AI evaluations, emphasizing the need for rigor and transparency to ensure meaningful comparisons. It introduces a measurement theory-based framework and checklist to improve baseline design, though implementation may be resource-intensive for smaller research teams.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"A nice position paper\", \"review\": \"The paper is well written and insightful. It analyzes and discusses many aspects in successful human baselines for the evaluation of foundation models. The literature research is also extensive.\", \"small_comments_to_the_authors\": [\"Lines 326-328 seem contradictory to what you say in paragraph 305-312, maybe it is worth it to make your point more clear here.\", \"Lines 334-336 seem like a left-in comment and not part of the text?\", \"Line 338 \\\"turnbe\\\" typo\", \"Line 397 \\\"andresources\\\" typo\", \"Line 402 I think a comma is missing\"], \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"Review\", \"review\": \"This is a well-crafted position paper that thoughtfully tackles an important issue in the field. It nicely articulates the problem of inconsistent and poorly defined human baselines in foundation model evaluations, shedding light on a critical challenge that deserves more attention.\\nIt\\u2019s great to see the authors\\u2019 detailed analysis of 113 papers which identifies recurring problems. \\nTheir systematic review exposes key shortcomings, such as inconsistencies in test sets and weak sampling methods, providing a clear picture of the flaws in current practices.\\nIt\\u2019s nice to see actionable tools are proposed like the checklist, as it offers researchers a structured way to improve the quality and transparency of human baselines in future AI studies.\", \"rating\": \"7\", \"confidence\": \"2\"}",
"{\"title\": \"Review\", \"review\": \"Summary:\\n\\nThe paper is a position paper, which argues that human performance baselines in AI evaluations must be more rigorously designed and transparently reported to enable meaningful comparisons. The authors highlight that many AI models claim superhuman performance, but the methods used to establish human baselines often lack rigor, such as small or biased sample sizes, inadequate controls for confounding variables, and inconsistent measurement frameworks. To address these shortcomings, the paper proposes a framework based on measurement theory. It assesses the validity and reliability of human baseline methods and was used to review 113 existing human baseline studies in AI evaluations. The authors provide a checklist for researchers to improve the design, implementation, and reporting of human baselines, aiming to enhance the trustworthiness of AI performance assessments.\", \"strengths\": [\"The paper discusses a critical issue in AI evaluation, emphasizing the need for rigorous and transparent human baselines. This is particularly important for trustworthyness given the increasing claims of AI systems outperforming humans in various tasks.\", \"The proposed framework, grounded in measurement theory, provides a structured approach to evaluating and improving human baseline methodologies. The checklist also offers practical guidance for researchers.\"], \"weaknesses\": [\"While the framework is rigorous, implementing it fully seems to be resource-intensive. A discussion on cost-effective strategies for smaller research teams or practitioners could improve its accessibility.\"], \"overall_evaluation\": \"This is a well-researched and timely position paper, and it fits the workshop, I recommend accept.\", \"rating\": \"7\", \"confidence\": \"3\"}"
]
} |
VSSQud4diJ | The Jailbreak Tax: How Useful are Your Jailbreak Outputs? | [
"Kristina Nikolić",
"Luze Sun",
"Jie Zhang",
"Florian Tramèr"
] | Jailbreak attacks bypass the guardrails of large language models to produce harmful outputs.
In this paper, we ask whether the model outputs produced by existing jailbreaks are actually *useful*. For example, when jailbreaking a model to give instructions for building a bomb, does the jailbreak yield good instructions?
Since the utility of most unsafe answers (e.g., bomb instructions) is hard to evaluate rigorously, we build new jailbreak evaluation sets with known ground truth answers, by aligning models to refuse questions related to benign and easy-to-evaluate topics (e.g., biology or math).
Our evaluation of eight representative jailbreaks across five utility benchmarks reveals a consistent drop in model utility in jailbroken responses, which we term the *jailbreak tax*. For example, while all jailbreaks we tested bypass guardrails in models aligned to refuse to answer math, this comes at the expense of a drop of up to 92% in accuracy.
Overall, our work proposes the jailbreak tax as a new important metric in AI safety, and introduces benchmarks to evaluate existing and future jailbreaks. We make the benchmark available at https://github.com/ethz-spylab/jailbreak-tax | [
"large language models",
"LLMs",
"jailbreaks",
"benchmark",
"utility"
] | Accept | https://openreview.net/pdf?id=VSSQud4diJ | https://openreview.net/forum?id=VSSQud4diJ | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"tCfVMAyhuX",
"G1zY34gcm1",
"Bf4VHnHoV9",
"3XITd513pK"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741103377448,
1740758169209,
1739708450050,
1740721314962
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission44/Reviewer_t62Q"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission44/Reviewer_CNDd"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission44/Reviewer_2t2A"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Novel and well-executed work on assessing risks of jailbroken language models\", \"review\": \"## Summary\\nThis paper introduces the concept of a **jailbreak tax** \\u2014 a measure of how much model utility degrades when jailbreak techniques bypass safety guardrails. The authors propose a novel evaluation framework that avoids the challenges of evaluating real-world dangerous outputs by using benign, easily-verified tasks as proxies. They align models to refuse these benign tasks using several alignment techniques, apply various jailbreak techniques, and then measure the degradation in utility compared to unaligned models.\\n\\n---\\n\\n## Strengths\\n\\n**Clear and Well-Motivated Core Idea** \\nThe authors clearly put forward the case that while previous work has focused primarily on jailbreak success rate (i.e., whether models can be made to respond to harmful queries), the question of how these jailbreaks affect the model\\u2019s capabilities remains understudied. Their introduction of a jailbreak tax as a metric for quantifying capability degradation during jailbreaking provides a useful framework for evaluating the true effectiveness of different attack methods.\\n\\n**Evaluation Framework** \\nThey propose a clever evaluation framework for measuring the utility of jailbroken models by getting models to refuse questions with known ground-truth answers, thereby avoiding challenges associated with evaluating genuinely harmful content.\\n\\n**Comprehensive Experimental Design** \\nThe paper covers a good range of jailbreak methods across three model sizes and multiple alignment techniques. The inclusion of the UnicornMath dataset as a control for out-of-distribution effects demonstrates careful attention to experimental design.\\n\\n**Interesting Results**\", \"the_paper_provides_several_interesting_findings_that_have_the_potential_to_impact_the_jailbreak_mitigation_work_of_language_model_developers\": \"- **The lack of correlation between jailbreak success rate and jailbreak tax** \\n The authors note that even when a jailbreak technique frequently succeeds in eliciting a response, it does not necessarily impose a high utility cost on the model.\\n\\n- **Larger models do not reduce the jailbreak tax** \\n The results suggest that scaling model size does not inherently mitigate the utility losses imposed by jailbreaks.\\n\\n- **Harder tasks incur higher jailbreak tax** \\n They demonstrate that more challenging queries lead to greater performance degradation when the model is jailbroken.\\n\\n**Clear Presentation** \\nThe paper is clear and well-written. \\n\\n---\\n\\n## Weaknesses\\n\\n**Lacking Comparison to Prior Work** \\nWhile the paper references other jailbreak benchmarks such as StrongREJECT, it would be helpful if the authors provided a more explicit comparison of their findings with prior work. For instance, Figure 4 of StrongREJECT suggests a correlation between the rate of non-refusal and model capability, whereas this paper asserts that there is \\u201cno apparent correlation.\\u201d\\n\\n**Potential Contradiction in the LLaMA 405B Results** \\nThe authors conclude that \\u201cthere is no apparent correlation between a jailbreak\\u2019s success rate and its impact on model utility,\\u201d yet for the LLaMA 405B experiments, the data seems to show a relatively strong correlation. 
Is this a spurious correlation or does a stronger correlation emerge as models get larger/more capable?\\n\\n**Highlighting the Low-Tax Jailbreak Cases** \\nA notable finding of this work is that for all of the alignment techniques considered, there are existing jailbreaks with low (~0-10%) jailbreak tax. It would be good if this finding were highlighted in the text.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"This is a great paper!\", \"review\": \"# Summary\\nThe paper introduces the concept of a \\\"jailbreak tax,\\\" representing the degradation in model performance when a jailbreak attack successfully circumvents alignment guardrails. This work addresses a critical gap in jailbreak evaluations: existing research often focuses on whether the jailbreak bypasses safety mechanisms but overlooks the utility or correctness of the resulting outputs. The authors construct a novel evaluation methodology by aligning models to refuse benign topics like math and biology and then assessing the correctness of jailbreak outputs against these pseudo-harmful tasks. The experiments span eight jailbreak methods, multiple alignment approaches (system prompt, fine-tuning, harmful task mixing), and three models, revealing substantial variance in jailbreak tax across methods.\\n\\n## Strengths\\n\\n- Evaluating the utility of jailbreak outputs is a critical but underexplored area in AI safety research. This paper takes a meaningful first step toward addressing this gap.\\n\\n- The use of pseudo-harmful tasks (e.g., EvilMath) allows for objective evaluation of jailbreak correctness. The authors employ multiple alignment techniques and jailbreak methods, providing a robust empirical analysis.\\n\\n- The authors systematically analyze the relationship between jailbreak success and utility degradation, identifying methods (e.g., Many-shot) that preserve performance better than others (e.g., PAIR).\\n\\n## Limitations\\n\\n- While the pseudo-harmful task approach is a clever methodological choice, it is not clear how well the results translate to real-world jailbreak scenarios involving genuinely harmful content (e.g., bomb-making instructions or phishing attacks). This is an inherently difficult problem to measure, but the paper could benefit from a more explicit discussion of this limitation.\\n\\n- The evaluation tasks (math, biology multiple-choice) are relatively well-structured with clear ground truth answers. However, real-world jailbreak queries often involve complex, multi-step reasoning or subjective judgments (e.g., generating persuasive phishing emails or constructing realistic social engineering scenarios). Evaluating utility in such open-ended tasks may introduce additional challenges that are not fully captured by the current methodology. While this is a difficult problem, this paper represents a valuable foundational step towards it.\\n\\nOverall, a great contribution!\", \"rating\": \"9\", \"confidence\": \"4\"}",
"{\"title\": \"An interesting investigation about jailbreaking tax\", \"review\": \"This work provides a quantified framework for investigating the 'real' success of a jailbreaking test instead of the rate of non-refusal responses. The motivation is practical and results show that a number of jailbreak methods achieves low jailbreak tax (i.e. the real success) even though they achieve high successful jailbreak rate.\", \"pros\": \"1. The authors employ a well-structured framework and evaluating jailbreak techniques across multiple benchmarks, which allows for a systematic and measurable evaluation of the jailbreak tax.\\n2. The use of datasets including GSM8K, MATH, WMDP ensures that the results is well-quantified.\", \"cons\": \"1. the score of 'utility' definition is limited. The paper define utility with correctness on benchmark tasks, which may not fully capture real-world risks. For example, an incorrect response to a biological weapon-related query could still provide dangerous information while no ground truth is provided.\\n\\nfollow-up question is that could the jailbreak tax generalized to open-ended malicious questions where no ground-truth exists? Open-ended question answering is a more generalizable scene for AI safety from my perspective.\", \"rating\": \"7\", \"confidence\": \"4\"}"
]
} |
UYZCcnwgc4 | Towards Understanding Distilled Reasoning Models: A Representational Approach | [
"David D. Baek",
"Max Tegmark"
] | In this paper, we investigate how model distillation impacts the development of reasoning features in large language models (LLMs). To explore this, we train a crosscoder on Qwen-series models and their fine-tuned variants. Our results suggest that the crosscoder learns features corresponding to various types of reasoning, including self-reflection and computation verification. Moreover, we observe that distilled models contain unique reasoning feature directions, which could be used to steer the model into over-thinking or incisive-thinking mode. In particular, we perform analysis on four specific reasoning categories: (a) self-reflection, (b) deductive reasoning, (c) alternative reasoning, and (d) contrastive reasoning. Finally, we examine the changes in feature geometry resulting from the distillation process and find indications that larger distilled models may develop more structured representations, which correlate with enhanced distillation performance. By providing insights into how distillation modifies the model, our study contributes to enhancing the transparency and reliability of AI systems. | [
"Interpretability",
"Reasoning Model",
"Distillation",
"Sparse Crosscoder"
] | Accept | https://openreview.net/pdf?id=UYZCcnwgc4 | https://openreview.net/forum?id=UYZCcnwgc4 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"s6BQZdU0ck",
"iML8rRU98w",
"4m0Xi0mKKH",
"34u064VqmM"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740815156805,
1740435619658,
1741145442365,
1740246824973
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission120/Reviewer_MB4z"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission120/Reviewer_vDiT"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission120/Reviewer_s2qG"
]
],
"structured_content_str": [
"{\"title\": \"Review of \\u201cUnderstanding Distilled Reasoning Models: A Representational Approach\\u201d\", \"review\": \"### Summary\\nThis paper studies the effect of model distillation on reasoning features in Large Language Models (LLMs). By using a sparse crosscoder approach, the authors compare activation patterns in base vs. distilled Qwen-series models, highlighting \\u201cunique\\u201d directions in which the distilled models appear to form or reshape reasoning features (e.g., self-correction, heuristic reasoning). The analyses suggest that larger distilled models may form more structured representations, as evidenced by lower parallelogram losses on select tasks.\\n\\n---\\n\\n### Strengths\\n- **Novel angle on distillation**: The idea of examining new \\u201creasoning directions\\u201d induced by distillation helps clarify how knowledge transfer might alter internal representations.\\n- **Sparse crosscoder technique**: Using this approach to isolate features linked to reasoning tokens (\\u201cwait,\\u201d \\u201ctherefore,\\u201d etc.) is a creative application of interpretability methods.\\n- **Structured Representations**: The parallelogram-loss analysis for semantic relationships is interesting, and the results that bigger distilled models might achieve better \\u201cfeature geometry\\u201d is a notable hypothesis for future work.\\n\\n---\\n\\n### Weaknesses and Concerns\\n1. **Limited scope and generalizability**: The paper only evaluates Qwen-based models. It\\u2019s unclear whether these results translate to other architectures or instruction-tuned variants.\\n2. **Focus on a single interpretability method**: While the sparse crosscoder is valuable, readers may wonder whether additional methods (e.g., neuron-level causal interventions) could corroborate the findings.\\n3. **Distillation vs. other fine-tuning effects**: The paper does not fully separate how much of the \\u201cunique\\u201d directions in reasoning come specifically from distillation (rather than dataset differences, chain-of-thought, or RLHF).\\n4. **Methodological clarity**: Certain details, like how threshold values were chosen or the exact distribution of data from which tokens were sampled, could be more thoroughly described for easier reproducibility.\\n5. **Lack of detailed analyses**: More analyses or experiments to interpret the results more clearly would make the results much more compelling.\\n\\n---\\n\\n### Recommendation Rationale\\nDespite the interesting premise, the current results feel somewhat narrow in scope, leaving questions about broader applicability and the causal mechanism driving the emergent reasoning directions. Additional experiments (e.g., ablations or controls with different base architectures) would strengthen the paper. As it stands, I consider it **marginally below** the acceptance threshold due to limited robustness of evidence and scope.\\n\\nNevertheless, the direction is promising, and further exploration into whether these findings hold across tasks and model families could produce valuable interpretability insights. With more rigorous experiments and a broader set of models, this work could contribute substantially to our understanding of how knowledge distillation shapes model reasoning.\", \"rating\": \"5\", \"confidence\": \"2\"}",
"{\"title\": \"Review\", \"review\": [\"**Paper Summary**\", \"This paper presents an empirical study of the representational similarity between distilled reasoning models and their base counterparts. The authors employ sparse crosscoders trained on pairs of base and distilled Qwen models to quantify the degree to which features are shared between the models. The paper highlights examples of features that appear exclusively in the distilled models and examines the geometry of the representation space, noting that models with larger parameter counts tend to have a more structured representation space.\", \"**Strengths**\", \"The use of a crosscoder to analyze distilled reasoning models is a creative approach.\", \"The method for distinguishing shared versus non-shared latents based on the relative decoder norm seems to be a promising direction.\", \"**Weaknesses**\", \"The paper does not provide sufficient training details for the crosscoder. It is unclear whether the crosscoder achieves a satisfactory reconstruction loss, and details such as the number of dead latents or the sparsity details are not reported.\", \"In Section 4, the process by which the authors assign meaning to \\u201creasoning\\u201d latents is not clear. The paper describes three latents from the top 16 sorted by normalized relative norm and provides descriptive labels along with an \\u201cactivating example\\u201d for each. However, it is not clear how these descriptions were derived. How was the description obtained? How was the activating examples chosen, is the max-activating example? Are other activating examples consistent with the expiation provided fro the latent?\", \"The takeaways from Figures 2 and 3 are not clearly stated. Can we conclude anything more than \\u201cmost features are shared between the base and distilled models\\u201d?\", \"Some additional details about the experiment in section 6 would be appreciated. E.g., how are the PCA-ed activations computed? What is an example of a semantic parallelogram form the dataset?\", \"Nothing is said about the norm of the encoder. For instance, are there cases of latents for which the normalized relative norm of the decoder is 1 or 0, but the same metric for the encoder is around 0.5, or vice-versa?\", \"**Conclusion**\", \"Overall, crosscoder-based approach for analyzing distilled reasoning models is interesting, but the experimental rigor and clarity could be improved.\"], \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"comment\": \"This paper presents an interesting approach to analyzing reasoning features in distilled LLMs using a sparse crosscoder framework. There are novel insights into how distillation impacts model reasoning representations. The paper highlightsdifferences in reasoning feature geometry between base and distilled models. All reviewers acknowledge the value of the research direction but point out limited scope. Despite these limitations, the work presents an innovative framework that warrants discussion at the workshop. Given the interesting ideas and potential for further exploration, I recommend acceptance as a workshop paper.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Interesting work on reasoning in distilled models\", \"review\": \"This work compares the reasoning behavior across distilled LLMs using crosscoders.\\nOverall, I'm recommending an acceptance because I think the ideas are interesting, and would make for an interesting discussion at the workshop. Moreover, I think this could be a great conference paper with further work. That being said, the authors need to be more careful in the claims they make and improve the writing of the paper/ experiments. Below, I've elaborated on areas that need improvement. \\n\\n[1] Claims \\n\\nThe current research questions are not (fully) reflected or supported by the experiments. Below, I've suggested modifications and/or commented on the types of experiments I would have expected to see.\", \"q1\": \"\\\"What distinctive features do stilled models develop, and how do these features relate to the model reasoning abilities?\\\" -> \\\"What is the overlap between the features represented by the distilled models base model\\\". The authors need to provide further analysis of the features represented if they want to claim that they identify features that differ.\\n\\nQ2 -> \\\"Do distilled models exhibit a greater number of unique features as the base model size increases? If so, how does this divergence scale with model size?\\\" For the second claim, I'd expect the authors to propose a scaling law, or more generally highlight a trend. However, this is not in the current version of the paper. \\n\\nFurther, please be careful with what you attribute to model behavior. It's often possible that the observed trends are a result of the interpretability tool (crosscoders). More concretely, it is possible that the base model and distilled model encode the same feature, but that it is not identified up by the crosscoder. \\nThe authors have tried to tackle this through an ablation study in Section 5. However, I think this ablation measures faithfulness -- whether the features influence the output -- which is not the same. \\n\\n[2] Feedback on experiments\", \"section_5\": \"How are the ablations performed? (What values are the features set to?)\\nMore generally, I'd encourage the authors to be more careful in their conclusions. This experiment seems to test whether the features influence the model output -- not whether the features are unique. To claim that the features are unique, you'd need to show that they do not exist in the other LLM. \\n\\nRegarding the ablation results for the base model, it might be explained by https://arxiv.org/abs/2307.15771.\", \"section_6\": [\"Feature geometry\", \"The experiment setup and results in Section 6 are unclear -- why is this chosen above the original setup in Todd et al. (2023) (from which the task is taken)? In particular, using the first two principal components heavily influences the results. What percentage of the latent space is explained by the first two PCs? Usually the first two PCs are only chosen for visualisation purposes. I think this needs to be further motivated and analyzed. Currently, I'd strongly recommend using the original setup of Todd et al. (2023) instead.\", \"[3] Writing and References\", \"The authors should be more careful with references/ attributions of prior results:\", \"Waswani-> Vaswani\", \"Vaswani et al. 
did not introduce self-attention, the correct reference is https://arxiv.org/pdf/1601.06733\", \"https://arxiv.org/pdf/2001.08361 and https://arxiv.org/abs/2203.15556 are the correct references for scaling laws of LLMs\", \"[4] Precision of Language\", \"Line 039-> RLHF is RL, so I\\u2019d be careful\", \"Line 266: PCAed <- this type of language is too informal; please be careful with this more generally. Projections is more appropriate.\", \"LRH: \\u201cfeature corresponds to a one-dimensional direction\\u201d -> networks represent features as linear directions. One-dimensional would imply a single neuron.\", \"Equation 3 is missing a sum before the first part.\"], \"rating\": \"6\", \"confidence\": \"3\"}"
]
} |
U0D8VgYnqG | NLP-EHUGBO: BRIDGING THE FAIRNESS GAP IN LANGUAGE MODELS FOR LOW-RESOURCE AFRICAN DIALECTS | [] | Despite advancements in language technologies, large language models (LLMs) continue to exclude low-resource languages, particularly African dialects like Ehugbo, a critically endangered variant of Igbo spoken by fewer than 150,000 people in Afikpo, Nigeria. Ehugbo’s linguistic complexity, featuring two additional alphabets beyond Igbo’s 36, exacerbates its marginalization, as existing models fail to account for its unique structure. This exclusion perpetuates social and linguistic inequities, leaving speakers of such dialects without access to digital tools that could preserve their language and culture. This paper presents NLP-Ehugbo, a machine translation (MT) system designed to address this fairness gap. Using the only available parallel corpus, 1,021 Ehugbo-English sentences from the New Testament of the Bible, we evaluated and fine-tuned two state-of-the-art models, M2M100 (facebook/m2m100 418M) and NLLB (facebook/nllb-200-distilled-600M). Initial results were stark: M2M100 achieved a BLEU score of 1.2188, while NLLB scored only 0.0262. After fine-tuning, M2M100 improved to 16.1719, and NLLB achieved 20.4016, demonstrating the potential of adapting LLMs for low resource languages. Our findings reveal both promise and challenges. While fine-tuning significantly improves performance, the lack of diverse datasets limits translation quality and reinforces the need for inclusive data collection practices. This work highlights the importance of community-driven approaches, as linguistic preservation cannot be achieved without the active involvement of native speakers. The significance of NLP-Ehugbo lies in its contribution to the fairness discourse in LLMs. By focusing on Ehugbo, we expose the systemic bias that excludes low-resource dialects and advocate for a more equitable approach to language technologies. This project not only advances the field of low-resource MT but also serves as a call to action for researchers and developers to prioritize linguistic diversity, ensuring that no language is left behind in the digital age. | [
"low resource languages",
"African Languages",
"LLM for African languages",
"Small Language Models"
] | Reject | https://openreview.net/pdf?id=U0D8VgYnqG | https://openreview.net/forum?id=U0D8VgYnqG | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"jGmMrb3XXv",
"W0V5pFY2oy",
"FdtizAzSHJ",
"7zKRs56D6f"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740882910556,
1740202620314,
1741055127359,
1740908882732
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission45/Reviewer_nQeN"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission45/Reviewer_yGRw"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission45/Reviewer_AGk3"
]
],
"structured_content_str": [
"{\"title\": \"NLP-Ehugbo -- Needs more details about experimental results\", \"review\": \"## Contribution\\nIntroducing a machine translation system for Ehugbo-English via fine-tuning existing LLMs. \\n## Strengths\\n- There is an ethical importance to increasing the prevalence of marginalized languages in LLM training\\n- Fine tuning results show promising improvements from the baseline\\n\\n## Weaknesses\\n- Many of the training details could be placed in an appendix, or shown via a graph/plot instead of a paragraph description in the results\\n- Uses BLEU as only evaluation criteria, although BLEU can be quite brittle and non-representative\\n- Doesn't include any example translation sentences vs. ground truth (hard to grasp improvement just based on the delta in a single number)\\n\\n## Minor Edits\\n- Period missing at the end of page 2\\n\\n## Questions\\n- Given the sparsity of Ehugbo-specific data, did you consider using other Igbo-dialect datasets in addition to Ehugbo for finetuning, or using some of the Igbo datasets? It would be interesting to see whether finetuning even with those datasets which, although not the same as Ehugbo, are more similar than the majority of the data that these LLMs are trained on, would improve performance for Ehugbo as well. \\n\\n## Overall Thoughts\\nThis paper is interesting, and addresses an important dearth in LLM training. However, the paper would benefit greatly from more detailed analysis of results beyond just reporting the BLEU score and the training details.\", \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"NLP-EHUGBO finetunes multilingual machine translation models with small corpuses of available text written in Ehugbo.\", \"review\": [\"**Strengths:**\", \"**Valuable Topic:** Addresses the gap in machine translation for low-resource languages, specifically Ehugbo.\", \"**Fairness Focus:** Highlights the social and linguistic inequities stemming from underrepresentation in language technologies.\", \"**Weaknesses:**\", \"**Limited Dataset:** Relies on a small, domain-specific corpus (1,021 New Testament sentences) that fails to capture Ehugbo\\u2019s full linguistic diversity.\", \"**Narrow Evaluation:** Depends solely on BLEU scores without incorporating human assessments or additional metrics in other machine translation works. They also evaluate on the same dataset they finetune on.\", \"**Methodological Simplicity:** This work offers minimal innovation beyond standard fine-tuning of existing pre-trained models.\", \"**Insufficient Analysis:** The work does not thoroughly address how Ehugbo\\u2019s unique alphabets affect OOD model performance or the impact of the domain-specific corpus on other evaluation.\"], \"rating\": \"2\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"The paper addresses the fairness gap of LLM in low resource languages like Ehugbo. The authors fine-tuned the pre-trained Ehugbo-English models by expanding the dataset.\", \"review\": \"Strengths:\\n1. The paper address an important issue of culturally sensitive and fair LLMs for low-resource languages\\n2. The authors have expanded the dataset and fine-tuned the pre-trained models for machine translation.\", \"weakness\": \"1. The main contribution is the dataset the authors have not discussed the quality of the dataset and the dataset creation itself.\\n2. The authors did not compare the fine-tuning on the dataset with any SOTA transfer learning approaches or other methods.\\n3. The novelty is clearly missing in the paper\\n4. The authors may experiment on low resource language learning techniques such as back-translation,unsupervised data augmentation etc\", \"rating\": \"4\", \"confidence\": \"3\"}"
]
} |
TvSkPlDTVw | AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors | [
"You-Ming Chang",
"Chen Yeh",
"Wei-Chen Chiu",
"Ning Yu"
] | Deep generative models can create remarkably photorealistic fake images while raising concerns about misinformation and copyright infringement, known as deepfake threats. Deepfake detection technique is developed to distinguish between real and fake images, where the existing methods typically learn classifiers in the image domain or various feature domains. However, the generalizability of deepfake detection against emerging and more advanced generative models remains challenging. In this paper, being inspired by the zero-shot advantages of Vision-Language Models (VLMs), we propose a novel approach called AntifakePrompt, using VLMs (e.g., InstructBLIP) and prompt tuning techniques to improve the deepfake detection accuracy over unseen data. We formulate deepfake detection as a visual question answering problem, and tune soft prompts for InstructBLIP to answer the real/fake information of a query image. We conduct full-spectrum experiments on datasets from a diversity of 3 held-in and 20 held-out generative models, covering modern text-to-image generation, image editing and adversarial image attacks. These testing datasets provide useful benchmarks in the realm of deepfake detection for further research. Moreover, results demonstrate that (1) the deepfake detection accuracy can be significantly and consistently improved (from 71.06% to 92.11%, in average accuracy over unseen domains) using pretrained vision-language models with prompt tuning; (2) our superior performance is at less cost of training data and trainable parameters, resulting in an effective and efficient solution for deepfake detection. | [
"Vision-Language model",
"deepfake detection",
"visual question answering",
"prompt tuning"
] | Accept | https://openreview.net/pdf?id=TvSkPlDTVw | https://openreview.net/forum?id=TvSkPlDTVw | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"ZFrOkUENBe",
"Q4Lg9p4QMl"
],
"note_type": [
"decision",
"official_review"
],
"note_created": [
1741104391739,
1740405646717
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission2/Reviewer_uNMu"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"*AntifakePrompt* is a deepfake detection method that uses vision-language models and prompt tuning to distinguish fake images across various generative models. This approach improves accuracy and generalizability, outperforming traditional methods. Despite its strengths, the study is limited by the use of diffusion-based fake image datasets, which could introduce bias, and lacks a detailed explanation of the impact of prompt tuning on performance improvements.\", \"review\": \"**Summary**\\n\\nThe paper introduces AntifakePrompt, a deepfake detection method leveraging vision-language models and prompt tuning. The approach frames deepfake detection as a visual question-answering task, enhancing accuracy by using pretrained VLMs like InstructBLIP. Through prompt tuning, the model achieves high performance on both seen and unseen deepfake data, improving detection capabilities with fewer training resources\\n\\n**Strength**\\n\\n1. It formulates deepfake detection as a visual question-answering task and leverages vision-language models with prompt tuning, which is an effective strategy for improving generalizability and accuracy.\\n \\n2. The proposed method consistently outperforms traditional deepfake detection methods and state-of-the-art approaches across a wide range of datasets, including those from emerging generative models.\\n\\n3. The model achieves high accuracy with significantly fewer training parameters and data, making it a cost-effective solution for deepfake detection compared to other models that require extensive fine-tuning.\\n\\n**Weakness**\\n\\n1. This study evaluates fake image datasets generated exclusively by diffusion-based models, which may introduce bias. \\n2. The paper lacks a comprehensive analysis of why tuning a single word in the prompt within the VLM yields significant performance improvements.\", \"rating\": \"6\", \"confidence\": \"4\"}"
]
} |
TeokuqNJUh | Toward Trustworthy Neural Program Synthesis | [] | We develop an approach to estimate the probability that a program sampled from a large language model is correct. Given a natural language description of a programming problem, our method samples both candidate programs as well as candidate predicates specifying how the program should behave. This allows learning a model that forms a well-calibrated probabilistic prediction of program correctness. Our system also infers which predicates are useful to explain the behavior of the generated code, and humans preferred these in a human study over raw language model outputs. Our method is simple, easy to implement, and maintains state-of-the-art generation accuracy results. | [
"LLM",
"code generation",
"trustworthy AI"
] | Reject | https://openreview.net/pdf?id=TeokuqNJUh | https://openreview.net/forum?id=TeokuqNJUh | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"rCJSp85mEq",
"mu5rQ7oGCJ",
"SNlpZDGKfv",
"0tS7O7zwuF"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740878047582,
1740724794667,
1740299946024,
1741081682334
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission10/Reviewer_At7S"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission10/Reviewer_TgoD"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission10/Reviewer_EY8D"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Paper incomplete and not reflective of SOA\", \"review\": \"Although the premise of the paper has promise, it appears this paper is in an unfinished state. The abstract has a typo, and the introduction appears to be unfinished.\\nAdditionally, the results reported are not meaningful given the current state of the art: HumanEval and MBPP are used instead of SWEBench, and the models used are davinci and cushman, which have both been deprecated for quite some time in favor for much better models.\", \"rating\": \"3\", \"confidence\": \"5\"}",
"{\"title\": \"Good direction but poor evals and lots of questions about generalizability\", \"review\": \"Love the idea but have questions about generalizability\\nThis work takes a look at code generation using LLMs but through the lens of formal methods, specifically specification formulation. The authors leverage older codex models to implement an approach where the LLMs are asked to complete programs, input-output specifications, and relation specifications (akin to fuzzing test cases). They train a single layer classifier to determine if the generated outputs are probably correct.\", \"pros\": [\"A novel idea which blends generative AI with program synthesis\", \"A two pronged approach to program synthesis is\", \"Good selection of metrics and their definitions were clearly articulated\", \"Good selection of a relevant dataset\", \"Good mix of metrics and human evals\"], \"cons\": [\"The work was rushed. The following are clues as to why:\", \"Grammatical errors through out the paper\", \"Some references are improperly used. $\\\\texttt{swebench}$ is listed as an LLM system to solve GitHub issues. It's not; it's a benchmark\", \"Security concerns are dropped in the conclusion but nothing was previously mentioned\", \"The classifier is trained over data samples from two datasets but generalizability past this was not evaluated. This implies, their method is not scalable and for each distribution, a classier trained over lots of samples would be needed.\", \"I don't understand why the following statement holds, \\\"logical relations could serve the long tail of novel tasks that the LLM cannot reliably predict outputs for\\\"\", \"An ablation into the learned weights of the classifier is missing. It would have been extremely insightful to see what components actually have the most indication for whether a solution is correct.\", \"Without clear ablations and generalizability results, it's hard to justify this paper which standards in contrast to peer reviewed results which directly combat this (see Syzygy published at LLM4Code 2025)\"], \"rating\": \"3\", \"confidence\": \"5\"}",
"{\"title\": \"Review\", \"review\": [\"The paper presents a novel approach to improve the trustworthiness of neural program synthesis. It introduces a system that estimates the probability of a generated program being correct, using both candidate programs and candidate specifications (predicates). This method allows for well-calibrated probabilistic predictions of program correctness. In addition, the paper emphasizes explainability by generating human-readable specifications that clarify program behavior, making the system more interpretable. The results show that the proposed method not only enhances the trustworthiness and transparency of the generated programs but also maintains state-of-the-art generation accuracy. The approach addresses issues of calibration, explainability, and accuracy, providing a robust framework for program synthesis using large language models.\", \"Strengths\", \"The method provides calibrated probabilistic predictions of program correctness, improving the reliability of generated programs.\", \"By generating human-readable specifications, the approach enhances the interpretability of neural program synthesis, making the system more transparent.\", \"The framework maintains high generation accuracy while addressing calibration and explainability, setting a new benchmark in program synthesis.\", \"Weaknesses\", \"The system's reliance on generating human-readable specifications and probabilistic predictions adds complexity to the program synthesis process.\", \"While effective for certain tasks, the approach may face scalability issues when applied to more complex or larger programs.\", \"The method's performance relies on the availability and accuracy of candidate specifications (predicates), which may limit its applicability in some scenarios.\"], \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"comment\": \"This works is a bit rushed with several errors and lack of substantial experiments. The reviewers have provided several suggestions to strengthen the paper.\", \"title\": \"Paper Decision\"}"
]
} |
TctNv27aKp | Reliable and Efficient Amortized Model-based Evaluation | [
"Sang T. Truong",
"Yuheng Tu",
"Percy Liang",
"Bo Li",
"Sanmi Koyejo"
] | Comprehensive evaluations of language models (LM) during both development and deployment phases are necessary because these models possess numerous capabilities (e.g., mathematical reasoning, legal support, or medical diagnosis) as well as safety risks (e.g., racial bias, toxicity, or misinformation). The average score across a wide range of benchmarks provides a signal that helps guide the use of these LMs in practice. Currently, holistic evaluations are costly due to the large volume of benchmark questions, making frequent evaluations impractical. A popular attempt to lower the cost is to compute the average score on a subset of the benchmark. This approach, unfortunately, often renders an unreliable measure of LM performance because the average score is often confounded with the difficulty of the questions in the benchmark subset. Item response theory (IRT) was designed to address this challenge, providing a reliable measurement by carefully controlling for question difficulty. Unfortunately, question difficulty is expensive to estimate. Facing this challenge, we train a model that predicts question difficulty from its content, enabling a reliable measurement at a fraction of the cost. In addition, we leverage this difficulty predictor to further improve the evaluation efficiency through training a question generator given a difficulty level. This question generator is essential in adaptive testing, where, instead of using a random subset of the benchmark questions, informative questions are adaptively chosen based on the current estimation of LLM performance. Experiments on 22 common natural language benchmarks and 172 LMs show that this approach is more reliable and efficient compared to current common practice.\footnote{Code: github.com/sangttruong/reeval} | [
"Model Evaluation",
"Amortization",
"Adaptive Testing"
] | Accept | https://openreview.net/pdf?id=TctNv27aKp | https://openreview.net/forum?id=TctNv27aKp | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"qxU4VEQJwP",
"ioJxbsybBm",
"iTOegr93iZ",
"RVeydtHrIO"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1739754402258,
1741076323726,
1740849005986,
1740945699148
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission106/Reviewer_VQTd"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission106/Reviewer_trzX"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission106/Reviewer_uZHd"
]
],
"structured_content_str": [
"{\"title\": \"Review of Reliable and Efficient Amortized Model-based Evaluation\", \"review\": \"Summary:\\nThe paper introduces a novel application of Item Response Theory (IRT) to generative model evaluation, which enables reliable and test-invariant scoring across diverse benchmarks. It proposes an amortized calibration method that predicts item difficulty from textual embeddings, significantly reducing the cost and complexity of recalibration. Additionally, the work develops a conditional item generator using a large language model to create new test items with targeted difficulty levels, supporting efficient adaptive testing and continual item bank expansion.\", \"strengths\": \"\", \"very_well_motivated\": \"introduces an IRT-based, scalable framework that significantly improves reliability, reduces costs, and enables continuous, adaptive monitoring of evolving language models.\", \"conditional_item_generation\": \"A specialized LLM produces items targeted to specific difficulty levels, enabling a large and diverse question bank that supports efficient adaptive testing and replenishment of overused items.\", \"extensive_empirical_validation\": \"They evaluate on 25 datasets with 184 large language models, showing that the proposed approach is both reliable (robust to test set shifts) and efficient (uses fewer queries to achieve the same reliability).\\n\\nWeaknesses/Questions: \\nAlthough amortized calibration reduces the cost, some ongoing calibration of new tasks or drastically changed models might still be necessary. The paper could expand on best practices for deciding when re-checking question difficulty is required.\", \"minor_comments\": \"At least during the review period, the code-sharing link sends this reviewer to empty python files.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"comment\": \"This paper presents a novel application of Item Response Theory (IRT) for generative model evaluation, enabling reliable, test-invariant scoring while reducing cost and complexity through amortized calibration. It also introduces a conditional item generator using LLMs for adaptive testing, though its accessibility could be improved by clarifying psychometric concepts.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"This paper proposes an Item Response Theory (IRT)-based framework for evaluating generative models, combining amortized calibration and conditional question generation to reduce evaluation costs while improving reliability.\", \"review\": \"## Quality & Clarity:\", \"the_paper_addresses_a_critical_challenge_in_generative_ai\": [\"the prohibitive cost and test-set sensitivity of evaluations. The IRT framework is technically rigorous, with clear explanations of calibration (EM algorithm) and adaptive testing (Fisher information maximization). Figures 1\\u20133 effectively visualize the response matrix and adaptive item selection. However, \\u00a73.1 (Background) assumes familiarity with psychometrics, which may hinder accessibility.\", \"## Originality:\", \"While IRT has been used in NLP (Lalor et al., 2019; Maia Polo et al., 2024), this work innovates by:\", \"Introducing amortized calibration via content-based difficulty predictors (e.g., BERT embeddings \\u2192 MLP)\", \"Automating item bank expansion via GPT-4-finetuned conditional generators (targeting specific difficulty levels)\", \"This bypasses the linear scaling $O(N)$ calibration cost of traditional IRT.\", \"## Significance:\", \"Results on 184 LLMs and 25 benchmarks (HELM, AIR-Bench) show:\", \"50\\u201382% fewer questions needed vs. random subsetting\", \"0.92 Spearman correlation between IRT ability scores and full-test accuracy\", \"Amortized calibration reduces compute by 98% (3 GPU hrs vs. 150 hrs for EM)\", \"### Pros:\", \"Novel integration of IRT with modern ML techniques (amortization, LLM generators)\", \"Large-scale empirical validation across diverse models/tasks\", \"Open-source implementation facilitates adoption\", \"Theoretically grounded adaptive testing via: $$I(\\\\theta_t; \\\\hat{z}^j) = p(y=1 \\\\mid \\\\theta_t, \\\\hat{z}^j)(1 - p(y=1 \\\\mid \\\\theta_t, \\\\hat{z}^j))$$\", \"### Cons:\", \"Limited testing on non-NLP tasks (e.g., image generation)\", \"Item generator may inherit biases from GPT-4\\u2019s training data\", \"Assumes static model abilities (ignores iterative finetuning)\", \"No comparison to active learning baselines (e.g., BALD)\"], \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"Reviewed at ICLR '25 confernce.\", \"review\": \"A good paper on improving evaluation pipelines for generative models. The paper has received detailed reviews at the ICLR 2025 conference and I don't have anything more to add.\", \"rating\": \"7\", \"confidence\": \"3\"}"
]
} |
SdFEW9qKBo | Evaluating AI Safety in Polish: An Automated Red-Teaming Approach | [] | The development of multilingual large language models (LLMs) presents challenges in evaluating their safety across all supported languages. Enhancing safety in one language (e.g., English) may inadvertently introduce vulnerabilities in others. To address this issue, we propose a methodology for the automatic creation of red-teaming datasets for safety evaluation, categorizing them by risk type and attack style. We apply our methodology to the Polish language, highlighting the disparity between focusing on English and on Polish when generating safe outputs. | [
"rainbow teaming",
"safety",
"llms",
"red teaming"
] | Reject | https://openreview.net/pdf?id=SdFEW9qKBo | https://openreview.net/forum?id=SdFEW9qKBo | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"owpUlRk29n",
"ksC0ldjfqb",
"di861XUdXJ",
"HcFZXhnT3L"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740845402808,
1741054978006,
1740332726587,
1740466967585
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission27/Reviewer_enog"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission27/Reviewer_Ne3E"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission27/Reviewer_L6d4"
]
],
"structured_content_str": [
"{\"title\": \"Evaluating AI Safety in Polish: An Automated Red-Teaming Approach\", \"review\": \"In Evaluating AI Safety in Polish: An Automated Red-Teaming Approach, the authors outline a methodology for generating harmful and non-harmful prompts for large language models to test their robustness against Polish prompts. The authors do a good job of clearly describing the process for generating red-teaming datasets. While the evaluation section is also clear, there is limited discussion on the implications or significance of the results. One path to make the paper stronger could be comparing the Polish results against English results (since the authors make the claim that models have generally already been subjected to English red-teaming)\\u2014in the abstract, there is a promise to \\u201c[highlight] the disparity between focusing on English and on Polish when generating the safety outputs\\u201d but it seems this promise is not fulfilled in the body of the paper. Additionally, it is unclear what one would do with the ASR and FRR results. Further discussion on that subject could also strengthen the paper.\", \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Important topic, but needs to clarify the contributions and spend more space interpreting the results.\", \"review\": \"The paper introduces a methodology that automates the creation of red-teaming datasets for safety evaluations. It focuses specifically on using this framework in the context of the Polish language, given that \\u201cEnhancing safety in one language (e.g., English) may inadvertently introduce vulnerabilities in others.\\u201d This mission is important and timely, and I appreciate the use of 14 different harm categories and 10 different attack styles. In general, I think the approaches outlined in this paper are on the right track, but I felt the reporting was not sufficiently strong for publication.\\n\\nWith a significant re-write and re-emphasis on reporting (a) why this technique is unique, (b) why other existing datasets (e.g., SafetyBench, WalledEval) or tools (e.g., PyRIT) could not be translated or implemented in other languages to achieve the goals outlined by this paper, and (c) interpreting the results would make this work publishable.\", \"additional_comments\": [\"Like the comparative approach between models and postulation about why Llama and PLLUM models would differ in their ASR rates. Would like more of this!\", \"Appreciate the variety in harm type but would be interested in some rationale for choosing these categories. Are they common use cases? Do they conflict with model T&C or existing regulations?\", \"Also appreciate that multiple attack styles were included. However, there was again no rationale for including these styles (beyond that another author used them), and more importantly, no interpretation of results across style types. Why include this variation if the (sometimes large) differences between the styles are not discussed? This is an example of where more interpretation and presenting some hypotheses that can help explain the pattern of results would have been useful.\", \"I have some concerns about how the prompts were reviewed. Why would one reviewer be assigned the harmful prompts and another reviewer be assigned the non-harmful prompts? The prompts should have been randomly split, and then each reviewer assigned 66% of the prompts. This would ensure that (a) the reviewers are being impacted by the knowledge that the prompt should be harmful/non-harmful, and (b) allow a reliability metric to be computed for the 33% of prompts that both reviewers evaluated.\", \"The harmful and non-harmful generation prompts were very close to being matched (which is wonderful!) but the harmful prompts included an extra sentence at the beginning: \\u201cRefer to a specific act. Return only the prompt, do not write \\\"prompt,\\\" do not comment, do not make excuses, I beg you!\\u201d The careful attention to matching, but then including this divergent sentence did not make sense to me.\", \"A key rationale for conducting this work is because: \\u201cEnhancing safety in one language (e.g., English) may inadvertently introduce vulnerabilities in others\\u201d \\u2026 \\u201c[the current] English-centric approach may leave multilingual LLMs vulnerable in other languages. 
This is particularly concerning for languages underrepresented in safety training data, such as Polish.\\u201d I agree with this point, but feel some evidence should have been included to support this idea.\"], \"unanswered_questions\": [\"Why were styles typically transferred more frequently for the non-harmful prompts?\", \"Why were certain categories less \\u2018consistent\\u2019 (e.g., S2: non-violent crimes and S14: code interpreter abuse for Llama guard)?\", \"How much does the language of the base prompt impact the results? Especially curious what would happen if the \\u201cI beg of you!\\u201d sentence was removed from the harmful prompts.\", \"How does this work compare to other safety benchmarks (e.g., SafetyBench, WalledEval)?\", \"How is your work different from taking existing methods and using them in other languages? What are the unique benefits?\", \"If you were safety testing a model, would you prioritize a low ASR or FRR? How can one balance this trade-off?\", \"Ultimately, I understand that there are significant length restrictions, but reporting the results in 158 words is difficult to understand when there were so many paragraphs that could have been combined and lines with hanging words and phrases. That space could have been used to beef up the reporting and clarify the contributions of this work. Too many of the key findings were relegated to the appendix.\"], \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"title\": \"The paper constructs a dataset for the Polish language to test the safety of LLMs. The final experimental results indicate that there is significant disagreement among different models regarding whether the prompts are safe or not.\", \"review\": \"Strengths:\\n1. The paper addresses an important issue: the safety evaluation of LLMs in multilingual support.\\n2. The paper focuses on a less commonly studied language, Polish, and constructs its own dataset, which, if understood correctly, consists of 473 harmful prompts and 500 non-harmful prompts, totaling 973 test samples.\\n3. The paper conducts experiments on multiple open-source models.\", \"weaknesses\": \"1. What is the innovation in the pipeline from data generation, manual review, to final model evaluation compared to existing work?\\n2. In the experimental section, we observe significant deviations in test results across different models for the same input data. For example, some models have an ASR of 63.8%, while others have only 0.56%. Does this indicate a flaw in our method design? Or should the authors provide a deeper analysis of why such large discrepancies occur across models?\\n3. There are some obvious typos in the paper, such as \\\"71.7%\\\" being written as \\\"71,7%\\\" in Section 3.2.\", \"suggestions\": \"1. The dataset generation method involves using LLMs for automatic generation followed by manual review. Given that LLMs have over a 90% probability of generating the desired dataset, consider scaling up the LLM-generated data and eliminating the manual review process.\\n2. The paper uses a simple binary classification metric (safe & unsafe) to evaluate LLM safety. Could this be the reason for the significant differences in results across models? For example, to increase reliability, could the test data be modified into group-based datasets? For instance, grouping 5 datasets together and setting the group as \\\"safe\\\" if 4 out of 5 (depending on the strictness of detection) are classified as safe. The same method could then be applied when testing with other models to avoid bias from individual data points.\\n3. Another possibility is that the differences in results across models are due to inherent differences in the open-source models themselves. For example, what Llama-Guard-3-8B considers \\\"safe\\\" has a 63.83% ASR in the Bielik-7B-Instruct-v0.1 model. Additionally, we observe that ASR and FRR metrics show opposite trends in some models. Does this suggest that some models are inherently more \\\"optimistic\\\" (lenient) while others are more \\\"pessimistic\\\" (cautious)? Could more experiments be conducted to validate this hypothesis?\", \"rating\": \"3\", \"confidence\": \"4\"}"
]
} |
SPlhZYuH9e | Red Teaming for Trust: Evaluating Multicultural and Multilingual AI Systems in Asia-Pacific | [
"Akash Kundu",
"Adrianna Tan",
"Theodora Skeadas",
"Rumman Chowdhury",
"Sarah Amos"
] | This paper presents the first multicultural and multilingual AI Safety Red Teaming Challenge focused on the Asia-Pacific region, conducted in November and December 2024. Red teaming, a critical method for evaluating the safety and robustness of AI systems, involves stress-testing models to uncover vulnerabilities, biases, and limitations. While traditionally performed by AI developers in Western-centric contexts, this study expands the scope by emphasizing cultural and linguistic nuances unique to East, Southeast, and South Asia. The challenge included 54 participants from nine countries, representing academic and research institutions, and involved an in-person event followed by a virtual component. The primary objective was to establish a baseline for AI performance across diverse cultural and linguistic contexts, addressing the demographic and cultural disparities often overlooked in existing AI evaluations. Our findings underscore the necessity of addressing both universal and region-specific risks in AI, paving the way for more equitable global AI adoption. | [
"AI Red Teaming",
"Multilingual Bias Detection",
"Fairness in LLMs"
] | Accept | https://openreview.net/pdf?id=SPlhZYuH9e | https://openreview.net/forum?id=SPlhZYuH9e | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"ztj0CBYqP7",
"xeJ11tl4dG",
"PNDUMt7DfL"
],
"note_type": [
"decision",
"official_review",
"official_review"
],
"note_created": [
1741109575508,
1739808850513,
1740419264269
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission62/Reviewer_WrKJ"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission62/Reviewer_MzEj"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"comment\": \"The reviewers strongly disagree on the paper\\u2019s merits. One highly positive review (8) highlights the novelty, significance, and well-structured methodology, while another critical review (4, confidence 2) argues that the study lacks novelty and depth. Given the high confidence of the positive review and the low confidence of the negative one, the concerns raised should be weighed accordingly. The paper would benefit from a stronger related work discussion, clearer methodology framing, and deeper comparisons to existing benchmarks. Despite these limitations, this work addresses a complex and underexplored problem, making it a valuable addition to the workshop.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Interesting work with substantial shortcomings\", \"review\": \"General:\\n- I think this paper uses an incorrect font type and size\", \"strengths\": [\"strong human-in-the-loop experiment with human subjects\"], \"weaknesses\": [\"this paper reads like a report. While interesting insights, it misses an in-depth related work discussion and a stronger motivation.\", \"I am also missing a discussion on culture vs. language. Speaking a different language does not necessarily induce a different culture and vice versa.\", \"the proposed methodology is not very strong, nor novel. It is a very simple setup\", \"there is already quite some work on this, see XSafety and M-ALERT benchmarks, which have basically shown the same findings already.\"], \"rating\": \"4\", \"confidence\": \"2\"}",
"{\"title\": \"Important contribution that would benefit from more procedural details and a clearer comparative conclusion\", \"review\": \"**Quality** (8): this study conducts a multi-lingual, multi-cultural red-teaming exercise to measure to what extent four LLMs exhibit biases in non-Western contexts. The motivation is well founded and this study addresses an important gap in AI safety with respect to both research and practice. While the writing and methodology is relatively clear, there is room for more explicit indications of processes and conclusions.\\n\\n**Clarity** (8): The motivation and methodology for this study are relatively clear. The paper would be benefit from more clarity with respect to the human subjects and design, the annotation process, and the analysis/conclusions. In particular:\", \"human_subjects_and_design\": [\"In the paper body and/or in Appendix A1 - to understand implications and external validity of research, more details are needed about the demographics of participants, how they were recruited, how they were compensated, whether the points translated into an in-kind incentive, the time provided to them to play with models, their familiarity with models and previous use with LLMs, how many chances they were given to elicit biased responses, how many models each participant interacted with, whether they given access to outside tools while conducting this exercise?\", \"Authors might note in the paper body that instructions (A.2.7) give participants tips on how to elicit bias and as such the results in this paper aren\\u2019t representative of bias rates for average use (they may overestimate).\", \"Line 117: which models did each region receive? How were these chosen?\", \"Appendix A.2.3. what did participants actually see here? Why were model names broadly provided in the training but blinded in use? Why not blind in both?\", \"Lines 186 regarding \\u201cinstitutional guidelines,\\u201d did this study have an IRB? If so, could more details be provided?\", \"Authors might note in the paper body that instructions (A.2.7) give participants tips on how to elicit bias and as such the results in this paper aren\\u2019t representative of bias occurrence rates for average uses (they may overestimate).\", \"Line 136: what were differences in in-person and online formats? How were each conducted? When? Which types of participants were involved in each?\"], \"annotation_process\": [\"Line 139: \\u201cthe platform automatically flagged potentially harmful prompts.\\u201d Did experts (line 123) only launch the two-stage review process for flagged prompts? If so, can the authors provide validation on the strength of the classifier?\", \"It seems important that the authors explicitly acknowledge that the \\u201cstandardized rubric\\u201d is still quite subjective in that bias is still binarily determined by the annotator. The standardization comes in only with respect to how the prompt count maps to points; whether an exploit is unique; and how exploits aggregate across bias categories.\", \"Line 123: could the authors provide more information on what \\u201cexpert\\u201d means in each context? How were the annotators recruited, what were their relevant demographics, and what constitutes cultural and linguistic proficiency?\", \"Analysis/conclusions:\", \"5.3 This section seems to diverge from the motivation of the paper. What is the hypothesis underlying this section? What larger point are these graphs aiming to make about Western v non-Western biases? 
Could this be made more explicit? Why were these particular examples chosen? Do similar trends exist for other biases and countries in the dataset? For Figure 1, if the implication is that these same results would not be observed in a western context, for example, could you show a Western reference point that contextualizes this gap?\", \"5.2 how many submissions overall? Similarly, line 229 - why is percentage of *flagged* submissions rather than overall submissions?\", \"Table 2 \\u201cKey Focus Areas\\u201d \\u2014 were these focus areas by design or is this what focus ended up being, without instruction?\", \"Table 4/5: it would be interesting to see data by model and country; by # of turns; and by paired results (the same prompt in English vs non-english). Does \\u201csuccess rate\\u2019 refers to biased responses out of flagged responses; or biased responses out of all responses? What does \\u201cDisp.\\u201d Mean?\"], \"misc\": \"- Line 358 - could the authors clarify whether this is a limitation of the study methodology or of language models more generally? If the former, could the link be made more explicit?\\n- Line 973 - what does \\u201cdomain experts\\u201d mean here?\\n\\n**Originality** (8): this is a well-structured documentation of red-teaming in a novel context and adds original and scientific evidence to literature on bias of LLMs. Its scientific strength would be increased in addressing the above points on design, annotation, and analysis. \\n\\n**Significance** (8): the paper shows us that bias exists in non-Western/English contexts for LLMs. However, to really drive home the significance of this research, the authors could consider adding clear reference points and cleaner comparative analysis. For example, is incidence of bias in these cases higher than in similar red-teaming exercises in Western contexts? When analyzing *pairs* of prompts, is bias higher in non-English versions than in English versions?\", \"rating\": \"8\", \"confidence\": \"4\"}"
]
} |
RQjUpeINII | Top of the CLASS: Benchmarking LLM Agents on Real-World Enterprise Tasks | [
"Michael Wornow",
"Vaishnav Garodia",
"Vasilis Vassalos",
"Utkarsh Contractor"
] | Enterprises are increasingly adopting AI agents based on large language models (LLMs) for mission-critical workflows. However, most existing benchmarks use synthetic or consumer-oriented data, and do not holistically evaluate agents on operational concerns beyond accuracy (e.g. cost, security, etc.). To address these gaps we propose CLASSIC, a novel benchmark containing 2,133 real-world user-chatbot conversations and 423 workflows across 7 enterprise domains including IT, HR, banking, and healthcare. We evaluate LLMs across five key metrics -- Cost, Latency, Accuracy, Stability, and Security -- on a multiclass classification task that requires the model to select the proper workflow to trigger in response to a user message. Our dataset of real-world conversations is challenging, with the best LLM achieving an overall accuracy of only 76.1%. Across all five metrics, we find significant variation in performance -- for example, Gemini 1.5 Pro only refuses 78.5% of our jailbreak prompts compared to Claude 3.5 Sonnet's 99.8%, while GPT-4o costs 5.4x more than the most affordable model we evaluate. We hope that our benchmark helps to increase trust in LLM applications by better grounding evaluations in real-world enterprise data. We open source our code and data, and welcome contributions from the community. | [
"agents",
"llms",
"benchmarks",
"enterprise",
"conversational",
"workflows"
] | Accept | https://openreview.net/pdf?id=RQjUpeINII | https://openreview.net/forum?id=RQjUpeINII | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"sXgUb4G2Vg",
"pPwfIHoTwz",
"nJStzpFo2w",
"AKwiExDEAS"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740876561079,
1740907312528,
1740354176580,
1740876905381
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission109/Reviewer_h5PR"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission109/Reviewer_GiNa"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission109/Reviewer_cBfm"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Good paper with some methodological limitations\", \"review\": \"This paper introduces CLASSIC, a benchmark evaluating LLM agents on enterprise tasks using actual chatbot interactions across cost, latency, accuracy, stability, and security metrics. It reveals performance trade-offs among leading LLMs and identifies security vulnerabilities and inconsistencies absent from synthetic benchmarks.\", \"strengths\": [\"Practical focus on real-world enterprise needs with a comprehensive five-metric evaluation framework\", \"Dataset derived from genuine enterprise interactions across multiple domains\", \"Thorough cross-model comparisons highlighting key deployment limitations\", \"Valuable jailbreak assessment revealing practical evaluations of SoTA LLMs\"], \"weaknesses_and_suggestions\": [\"Single-vendor data source (Aisera) potentially limiting generalizability\", \"Security evaluation restricted to jailbreak prompts, neglecting other critical vulnerabilities\", \"Narrow focus on workflow selection rather than multi-step reasoning or document retrieval\", \"Insufficient analysis of the causes behind observed performance instability\", \"I still believe this paper should be accepted, though, because despite limitations in dataset diversity and security assessment scope, this paper makes a significant contribution to an application of LLM trust, and I believe it will be an interesting contribution to the conference.\"], \"rating\": \"7\", \"confidence\": \"2\"}",
"{\"title\": \"Paper is easy to follow and is nicely written. The uploaded subset is quite small.\", \"review\": \"### Summary\\n\\nThis paper provides a benchmark for evaluating LLMs on workflow classification. Multiple metrics are provided including cost, latency, accuracy and stability, with security being used to evaluate jailbreak attacks.\\n\\n### Strengths\\n1. Nicely written.\\n2. The figures are quite instructive.\\n\\n### Weaknesses\\n\\n1. The uploaded subset contains only 70 out of the 1793 conversations are reported. Though it is understandable that the complete dataset is under review, the given sample is hardly 4% of the overall dataset. I think this is not representative of the whole dataset.\\n2. The Latency metric does not make sense. Is the objective to check how much time the LLM is taking to respond? If so, then the latency of the user network also comes into play. Moreover, latency is a specification of the model and has nothing to do with the dataset.\\n3. It would have been interesting to see the actual prompts given for jailbreaking. But it is not available.\", \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"title\": \"CLASSIC is a benchmark evaluating classification capabilities of agents triggering different workflows in enterprise domains.\", \"review\": [\"**Strengths:**\", \"**Real-World Data:** Utilizes a dataset of 2,133 genuine user-chatbot conversations and 423 workflows from seven enterprise domains, offering a more realistic evaluation environment compared to synthetic benchmarks.\", \"**Evaluation:** Assesses LLMs on a broad range of metrics (Cost, Latency, Accuracy, Stability, and Security), going beyond just accuracy.\", \"**Weaknesses:**\", \"**Task Specificity:** The benchmark focuses solely on a multiclass classification task (selecting the appropriate workflow), which may not capture the full spectrum of enterprise operational challenges. Many works have also found classification is largely solved with good in-domain data.\", \"**Cost Analysis Ambiguity:** The cost metric comparison could benefit from a more detailed breakdown to contextualize the trade-offs between different models. Agent trajectories could be useful to analyze.\", \"**Limited Multi-Turn Interaction:**\", \"*\\\"Most (85%) of the conversations in our dataset are single-turn dialogues.\\\"*\", \"This leads the benchmark to not necessarily evaluate agents in the sense of multi-turn decision-making, but rather the classification capabilities of LLMs in enterprise workflows.\", \"This distinction and the definition of \\\"agent\\\" being used should be more clearly articulated.\", \"This is a useful benchmark for certain enterprise use cases but does not necessarily tread much new ground compared to previous agent benchmarks.\"], \"rating\": \"6\", \"confidence\": \"5\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
QcmEb490bK | Self-Ablating Transformers: More Interpretability, Less Sparsity | [
"Jeremias Lino Ferrao",
"Luhan Mikaelson",
"Keenan Pepper",
"Natalia Perez-Campanero"
] | A growing intuition in machine learning suggests a link between sparsity and interpretability. We introduce a novel self-ablation mechanism to investigate this connection ante-hoc in the context of language transformers. Our approach dynamically enforces a k-winner-takes-all constraint, forcing the model to demonstrate selective activation across neuron and attention units. Unlike post-hoc methods that analyze already-trained models, our approach integrates interpretability directly into model training, promoting feature localization from inception. Training small models on the TinyStories dataset and employing interpretability tests, we find that self-ablation leads to more localized circuits, concentrated feature representations, and increased neuron specialization without compromising language modelling performance. Surprisingly, our method also decreased overall sparsity, indicating that self-ablation promotes specialization rather than widespread inactivity. This reveals a complex interplay between sparsity and interpretability, where decreased global sparsity can coexist with increased local specialization, leading to enhanced interpretability. To facilitate reproducibility, we make our code available at https://github.com/keenanpepper/self-ablating-transformers. | [
"Mechanistic Interpretability",
"Sparsity",
"Language Models",
"Transformer"
] | Accept | https://openreview.net/pdf?id=QcmEb490bK | https://openreview.net/forum?id=QcmEb490bK | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"u2YNE9nUHk",
"a4NqaK4uuB"
],
"note_type": [
"official_review",
"decision"
],
"note_created": [
1740916472361,
1741182136177
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission68/Reviewer_3sRk"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"This paper introduces a self-ablation training mechanism for transformer using k-winner-takes-all during training\\u2014to show that focused specialization, rather than global sparsity, can enhance interpretability at a modest cost in perplexity and other core metrics.\", \"review\": \"This paper designs and evaluate a training mechanism that enforces selective activation in Transformer models. The authors introduce a self-ablation procedure based on a $k$-winner-takes-all method that only remains active during training, so that the final inference-time architecture is unchanged. The text is mostly clear, although the descriptions of local vs. global ablation could use a bit more elaboration. Nonetheless, the methodology is laid out in enough detail for another researcher to attempt a replication.\\n\\nThe paper\\u2019s main strength lies in its careful demonstration that pushing for \\u201cfocused specialization,\\u201d rather than overall massive sparsity, can lead to more interpretable internal circuits. This is a useful perspective considering the interpretability research community place heavy emphasis on sparsity as a stand-in for interpretability itself. They show that some tasks, such as Indirect Object Identification on a synthetic dataset (TinyStories), can be handled by fewer internal connections once ablation is applied, as measured by Automatic Circuit Discovery. The results consistently indicate that forced ablation improves interpretability signals while modestly increasing perplexity. This more or less captures the central significance: it suggests interpretability need not strictly come from turning off large portions of neurons, but instead from targeted local gating. On the other hand, experiments are confined to TinyStories. The dataset is extremely small and might not reveal the full complexity of real-world language tasks, so the approach\\u2019s generalizability remains somewhat open. Although for a workshop submission, the paper can be seen as promising early results worthy of being shared with the community.\\n\\nIn terms of quality, I see no major flaws in data analysis or rigour. The clarity is fair, though a few parts could be expanded for a more thorough exposition of certain hyperparameters or complexities in global ablation. The originality is adequately demonstrated. The significance is moderate but could be higher if tested on bigger models or more challenging benchmarks (eg, unlearning as the authors point out). Still, it is a meaningful step toward bridging structured interpretability with normal transformer training, and more importantly, provide a perspective away from prioritizing sparsity at the cost of other metrics.\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
QSUP3NJBbM | PATTERNS AND MECHANISMS OF CONTRASTIVE ACTIVATION ENGINEERING | [
"Yixiong Hao",
"Ayush Panda",
"Stepan Shabalin",
"Sheikh Abdur Raheem Ali"
] | Controlling the behavior of Large Language Models (LLMs) remains a significant challenge due to their inherent complexity and opacity. While techniques like fine-tuning can modify model behavior, they typically require extensive computational resources. Recent work has introduced a class of contrastive activation engineering (CAE) techniques as promising approaches for steering LLM outputs through targeted modifications to their internal representations. Applied at inference-time with zero cost, CAE has the potential to introduce a new paradigm of flexible, task-specific LLM behavior tuning. We analyze the performance of CAE in in-distribution and out-of-distribution settings, evaluate drawbacks, and begin to develop comprehensive guidelines for its effective deployment. We find that 1. CAE is only reliably effective when applied to in-distribution contexts. 2. Increasing the number of samples used to generate steering vectors has diminishing returns at around 80 samples. 3. Steering vectors are susceptible to adversarial inputs that reverse the behavior that is steered for. 4. Steering vectors harm the overall model perplexity. 5. Larger models are more resistant to steering-induced degradation. | [
"LLMs",
"activation steering",
"representation engineering",
"controlled text generation",
"safety",
"alignment"
] | Accept | https://openreview.net/pdf?id=QSUP3NJBbM | https://openreview.net/forum?id=QSUP3NJBbM | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"bByGYviqIH",
"T36r8i6HYz",
"72jGodFReF",
"1axAu4GyA6"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1739690803761,
1740916765275,
1740856227056,
1740687518748
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission134/Reviewer_wFNu"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission134/Reviewer_vSFa"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission134/Reviewer_wYBC"
]
],
"structured_content_str": [
"{\"title\": \"Official Review of Submission134\", \"review\": [\"# Strengths\", \"**Relevant Topic**: Controlling LLM behavior at inference time is a crucial area of research for safety and alignment, and CAE has been a more promising approach.\", \"**Empirical Investigation**: Paper attempts a comprehensive systematic investigation of CAE, varying parameters like dataset size, steering strength, and model size.\", \"**Meaningful Validation Of Previous Work**: Sweeps across layers and steering strengths.\", \"**Focus on Out-of-Distribution (OOD) Generalization**: The paper explicitly addresses the crucial question of OOD performance, which is often overlooked. The creation of a new OOD evaluation dataset is a positive step, although the dataset itself needs more scrutiny.\", \"**Perplexity Analysis**: The attempt to quantify negative side effects of steering via perplexity is a good idea, addressing a critical aspect of controllability.\", \"# Weaknesses\", \"**Missing Citations**: L32, L34, L81, L108\", \"**Perplexity Analysis Vagueness**: The perplexity analysis (Section 6) is vague. How was the \\\"subset of texts in the Pile\\\" chosen? What constitutes a \\\"large change\\\" in likelihood?\", \"**Red-Teaming Results**: The red-teaming experiments are inconclusive.\", \"**Limited Model Evaluations**: Evaluations limited to Llama Family of Models: Would help if included evaluations for other model families with different architectures (eg Gemma)\", \"# Questions\", \"See Weaknesses\"], \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"Review for \\\"PATTERNS AND MECHANISMS OF CONTRASTIVE ACTIVATION ENGINEERING\\\"\", \"review\": [\"**Summary**\", \"This paper provides an empirical study of Contrastive Activation Engineering (CAE) for steering large language models. The authors investigate the effectiveness of CAE, its limitations, and potential negative side effects.\", \"**Strengths:**\", \"**Comprehensive Empirical Analysis:** The paper conducts a range of experiments, varying dataset size, steering strength, and layers, providing a decent overview of CAE's performance under different conditions. For example, Figures 2-5 show sweeps across layers and steering strengths for different models and dataset sizes. Figure 6 explores the impact of the number of examples used to generate steering vectors.\", \"**Practical Focus:** The study considers real-world applicability, investigating out-of-distribution performance (Section 5), adversarial robustness (via EPO, Section 7), and the impact on perplexity (Section 6), which are relevant to deployment.\", \"**Out-of-Distribution Analysis:** The creation of the small evaluation dataset (described in Section 5) represents a useful and novel contribution. The dataset comprises 540 questions, spanning 9 target behaviors, to mimic real-world deployment scenarios.\", \"**Analysis of Samples for Steering Vectors:** The paper shows that performance improvement from using more samples begins to decrease at 55-89 samples (Figure 6).\", \"**Weaknesses:**\", \"**Broken Citations:** Many citations are rendered as \\\"?\\\". This makes it difficult to verify the claims. Examples include several citations in the related works (Page 2) like (Turner et al., 2024; ?).\", \"**Broken Intra-Paper Link:**\\u00a0There is at least one instance of a broken internal link. On page 2, the authors state, \\\"Successfully red-team steering vectors with evolutionary prompt optimization in ??, albeit they're unlikely to be observed naturally.\\\" referring to a section that is not properly linked.\", \"**Lack of Clarity on Choice of Experiments**: The paper doesn't effectively motivate some experimental designs and lacks justification for its parameters. For example, the choice of the Fibonacci sequence for sweeping sample sizes (page 4) seems arbitrary without further explanation.\", \"**Limited Novelty:** While the empirical study is extensive, the paper doesn't introduce fundamentally new techniques or theoretical insights. The core methodology (contrastive activation addition) is taken directly from prior work (Panickssery et al., 2024, explicitly acknowledged on page 3). The contribution is primarily in the breadth of the empirical evaluation, which while **valuable**, has **diminished impact due to the presentation**.\"], \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Comprehensive CAE Steering Experiments: Confirmation with Limited Novelty or Narrative\", \"review\": \"Summary: The paper presents a wide-range of experiments on CAE steering vectors.\", \"the_main_experimental_findings_are\": \"Section 4. In-Distribution Analysis\\nIdentify optimal layers for steering Llama 8B and 70B by sweeping across layers (average across behaviors of MWE dataset) \\nEvaluate how many training samples are needed for generate reliable steering vectors.\\n\\nSection 5. OOD Analysis\\nTest OOD performance of steering vectors by creating a dataset that has a distributional shift compared to the training dataset.\\nOOD fails - would expect left low and right high values. -> confirms findings from Tan et al\\n\\n\\nSection 6. Perplexity of Steering\\nTurner et al. find that steering for positive concepts reduces perplexity on positive samples and increases on negative samples..\\nThe paper finds that analogously steering for \\\"French\\\", increases perplexity on Dutch and decreases perplexity on French.\", \"section_7\": \"Red Teaming Steering vectors with EPO\\nPaper tests how adversarial inputs generated with EPO generate inputs that make steering vectors act in the opposite direciton. \\nWhich is interesting, but given findings fall short of being conclusive.\", \"strengths\": \"1. Good introduction into Contrastive Activation Engineering, relevant literature and methods.\\n2. Extensive Experiments. The paper runs many different experiments (layer sweeps, OOD, ID, Adversarial examples) that confirm previous findings or slightly extend them. This is valuable to confirm and strengthen existing results.\", \"weaknesses\": \"- 1. Marginal contributions, with small benefit over existing work. Most of the findings are in line with previous work, as stated in the paper. \\n- 2. No cohesive narrative. Findings that are interesting but not investigated in depth enough.\\nFocus on less results, create a cohesive narrative, expand these results. \\n\\n\\nCitation and Reference Issues\\n\\u2022\\tThere are several instances of \\u201c?\\u201d in place of citations. These citation failures should be resolved to ensure proper referencing and clarity.\", \"rating\": \"4\", \"confidence\": \"4\"}"
]
} |
PZnDZdkGsE | StochasTok: Improving Fine-Grained Subword Understanding in LLMs | [
"Anya Sims",
"Cong Lu",
"Klara Kaleb",
"Jakob Nicolaus Foerster",
"Yee Whye Teh"
] | Despite impressive performance, large language models (LLMs) still struggle with seemingly simple questions such as "How many r's are in 'strawberry'?" This limitation highlights that LLMs are unable to understand how humans `see' language. We attempt to address this by experimenting with stochastic tokenization schemes in which the same text may be tokenized into multiple possible token sequences. We find that using stochastic tokenization during pretraining dramatically alters the representations learned and allows LLMs to capture understanding of fine-grained spelling-level detail in addition to the structure learned with standard tokenization. We demonstrate this by showing that LLMs pretrained with standard deterministic tokenization cannot be fine-tuned to answer language-game type questions, whilst with the minimal addition of stochastic tokenization during pretraining, the corresponding LLMs perform near-perfectly. Crucially, these improvements are achieved without any performance drop on standard benchmarks or any additional training cost — the only change is a single simple, computationally cheap preprocessing step. Overall, our results suggest that embracing stochastic tokenization can help enable LLMs to better understand how humans perceive language. | [
"language models",
"pretraining",
"tokenization"
] | Accept | https://openreview.net/pdf?id=PZnDZdkGsE | https://openreview.net/forum?id=PZnDZdkGsE | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"XjaZRXjVys",
"P8MWMydPUQ",
"GuACDCaAHj",
"17t1e7Enmy"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1740924428461,
1740908803893,
1740901181239,
1740892101262
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission43/Reviewer_kzAR"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission43/Reviewer_UE7x"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission43/Reviewer_cNaf"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"The approach is promising and outlined nicely.\", \"review\": \"### Summary\\n\\nThis paper provides a method for improving subword understanding of LLMs, based on the approach of stochastic tokenization. The method is computationally inexpensive, and provides said improvements.\\n\\n### Strength\\n\\n1. Paper is nicely written, and the overall flow of discussion is good.\\n2. The proposed method achieves significant gains in subword understanding task compared to standard training.\\n3. The computation cost of the method is minimal.\\n4. The performance on original benchmarks is not hindered, illustrating that stochastic tokenization is a promising approach.\\n\\n\\n### Weaknesses\\n\\n1. Code is not provided. There is no way to reproduce the results.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"StochasTok: Improving Fine-Grained Subword Understanding in LLMs\", \"review\": \"Very clever intervention to solve a particular type of problem that perplexes LLMs. Methodology reads robustly and the experiments seem thorough and well-designed. Overall good quality, clarify, and originality.\", \"some_areas_of_improvement\": \"It would be good to test this methodology on a larger model (maybe even GPT-2), although it is understandable that the paper did not do so due to the cost. Additionally, the analysis of the internal representations of words could be more clear\\u2014what exactly is being compared?\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"Interesting application and patching a 'hole' in current LLM operations, simple but effective methodology\", \"review\": \"# Evaluation of the Work\\n\\n## Summary of the Review\\nOverall, I find this a very interesting workshop paper. It attempts to patch a 'hole' in LLM performance that, while not substantial, can significantly influence the trust that users have in the models. \\n\\n## Quality \\n**Pros:** \\n- Overall, it is a solid argument\\n- Figures and descriptions of how the methods live up to current benchmarks are clear and concise \\n\\n**Cons:** \\n- I think section 2 was a little out of date in terms of the research that is currently going on. The most recent paper was 2020 and ignores a lot of the more recent explanations and evolutions in how tokenization is being considered in LLMs. For example, Ali et al. (2024) https://aclanthology.org/2024.findings-naacl.247/?link_id=7e4dc829-9b3e-43e3-9ed6-15767f1556be\\n\\n## Clarity \\n**Pros:** \\n- The right mix of information and 'extra' information in the appendix. I thought it was interesting and clear, but I am also glad the appendix was there. This is particularly relevant for section 3\\n- Figure 5 really made the point evident to me\\n\\n**Cons:** \\n- In lines 126/127, the authors refer to their \\\"first experiment,\\\" but I see no evidence of further experiments. That's fine if it is only one, particularly for a workshop, but I just can not seem to identify what the series of experiments is referring to. \\n\\n## Originality \\n**Pros** \\n- Overall, tokenization is becoming an important sub topic in LLMs and in how to use this feature to improve performance\\n- This work makes a concrete, unique contribution in looking at a specific implementation within the field \\n\\n**Cons:** \\n- Seems to play off a lot of previous tokenization work. One such work was Singh & Strouse (2024) in https://arxiv.org/abs/2402.14903 that tackled tokenization for arithmetic expressions. \\n- Small question on how novel this is, as other similar cases in tokenization have been explored, however I have not seen this particular case discussed yet \\n\\n## Significance & Relevance\\n**Pros:** \\n- It highlighted how, if such simple tasks are failing, we are not likely to view an LLM as trustworthy, making it very relevant to the workshop\\n- It is an interesting point and significant in the sense that it provides a way to fix the issue for spelling situations that do not impact other evaluation metrics (an important & significant feature). \\n\\n**Cons:** \\n- It, on the surface, seems like a very small solution to a few edge cases (e.g. users are not likely to ask 'how many 'r's are in strawberries?).\", \"rating\": \"8\", \"confidence\": \"3\"}"
]
} |
PT7SRb00he | LLMS LOST IN TRANSLATION: M-ALERT UNCOVERS CROSS-LINGUISTIC SAFETY GAPS | [
"Felix Friedrich",
"Simone Tedeschi",
"Patrick Schramowski",
"Manuel Brack",
"Roberto Navigli",
"Huu Nguyen",
"Bo Li",
"Kristian Kersting"
] | Building safe Large Language Models (LLMs) across multiple languages is essential in ensuring both safe access and linguistic diversity. To this end, we introduce M-ALERT, a multilingual benchmark that evaluates the safety of LLMs in five languages: English, French, German, Italian, and Spanish. M-ALERT includes 15k high-quality prompts per language, totaling 75k, following the detailed ALERT taxonomy. Our extensive experiments on 10 state-of-the-art LLMs highlight the importance of language-specific safety analysis, revealing that models often exhibit significant inconsistencies in safety across languages and categories. For instance, Llama3.2 shows high unsafety in category crime_tax for Italian but remains safe in other languages. Similar differences can be observed across all models. In contrast, certain categories, such as substance_cannabis and crime_propaganda, consistently trigger unsafe responses across models and languages. These findings underscore the need for robust multilingual safety practices in LLMs to ensure responsible usage across diverse communities. | [
"AI Safety",
"Benchmark",
"Large Language Models",
"Multilingual"
] | Accept | https://openreview.net/pdf?id=PT7SRb00he | https://openreview.net/forum?id=PT7SRb00he | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"pesonv65cI",
"pDYWSAPveQ",
"W4YrqDolCZ",
"T0Rdyn6IET"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740376776258,
1740812790153,
1740855942016,
1739802971850
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission41/Reviewer_9tBk"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission41/Reviewer_EHLB"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission41/Reviewer_Yagk"
]
],
"structured_content_str": [
"{\"title\": \"Summary\", \"review\": \"### Summary of the Paper:\\n\\nThis paper introduces a new multilingual benchmark called M-ALERT. It is based on the ALERT taxonomy and focuses on five languages: English, French, German, Italian, and Spanish. The authors provide a safety evaluation using this dataset on ten large language models (LLMs), highlighting differences in safety across languages.\\n\\n### Strengths:\", \"the_authors_evaluate_translation_using_two_independent_metrics\": \"COMET and MetricX which confirm decent translation results across languages\\n\\nThey provide a comprehensive evaluation of their methods across different model sizes and families.\\n\\n### Weaknesses:\\n\\nThe manual assessment for a subset of 100 random prompts could be extended to a larger sample to improve stability.\\n\\nThe assessment focuses on five widely available languages, but the analysis lacks at least one language with lower availability, such as a Slavic languages.\\n\\n### Questions and Suggestions for Improvement:\\n1. Why didn\\u2019t you filter out samples with poor translation metrics?\\n\\n2. The translation metrics in Table 2 could be presented more clearly, as they have similar values but different ranges. The MetricX follows a \\\"lower is better\\\" principle, while the COMET follows the opposite.\\n\\n3. How exactly is safety scoring conducted? In the Overall Safety Discrepancies section, safety is discussed in relation to time. This should be clarified\\u2014what time frame do you mean? How did you determine the safety threshold (0-90-99%)?\\n\\n4. The analysis of base models could be less emphasized, as the finding that safety scores are higher for instruction-tuned models compared to base models is an expected and well-known result.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Practical multilingual benchmark and insights\", \"review\": \"This paper introduces M-ALERT, a multilingual benchmark designed to evaluate safety in large language models (LLMs) across five languages: English, French, German, Italian, and Spanish. The authors build upon the existing ALERT taxonomy to comprehensively assess LLM safety, emphasizing language-specific vulnerabilities and inconsistencies across different categories. The authors plan to publicly released the dataset, enhancing transparency and facilitating further research in the field.\", \"strengths\": \"1. Comprehensive and Novel Benchmark: The paper addresses a critical gap by extending the ALERT benchmark to multiple languages and publicly releasing the dataset, thereby significantly contributing to the robustness, transparency, and generalizability of LLM safety evaluations.\\n2. In-depth Experimental Analysis: Experiments across multiple state-of-the-art LLMs reveal meaningful insights, particularly highlighting cases where safety performance diverges notably between languages. Such detailed scrutiny, including language-specific and category-specific analyses, greatly enhances the value of their findings.\\n3. Inter-language Consistency Metric to identify inter-language disparities\", \"weaknesses\": \"1. Potential Evaluator Bias: Relying primarily on LlamaGuard-3 for safety scoring introduces potential biases, particularly if this evaluator is not equally proficient across all languages and contexts evaluated. \\n2. Translation Quality Challenges: Although robustly validated, translating safety-sensitive content inherently carries nuanced challenges that automated methods might not fully capture.\", \"suggestions_for_improvement\": [\"Provide additional insights or experiments that quantify the impact of potential biases introduced by using a single evaluator\"], \"conclusion\": \"Overall, this paper significantly advances multilingual safety evaluation for large language models, offering a robust methodology, insightful experimental results, and practical guidelines for future model improvements. Despite minor limitations primarily related to evaluator selection and translation challenges, the contribution is timely, valuable, and impactful for both academic researchers and industry practitioners.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Nice step towards multi-lingual safety evaluations with minor quality concerns and questions regarding some conclusions.\", \"review\": \"This work presents a new multilingual dataset of 75k toxic prompts from 5 languages which are organized in a taxonomy.\\n15k prompts are taken from the english-only ALERT dataset, and 60k of the prompts are translated automatically.\\n29 LLMs from several families are benchmarked on this dataset. \\nThe authors claim to show \\\"significant safety inconsistencies across languages and topics\\\"\", \"pro\": [\"multilingual safety is underexplored but important!\", \"fine-grained structure and evaluation is good to give nuanced view on safety\", \"comprehensive evaluation of many models\", \"clear description of the limitations of the dataset & judge model that was used\", \"clear description of the experimental setup (I feel confident that I could reproduce the experiments in the paper)\"], \"cons\": [\"translations done by models, only 0.066% of translations are checked by human experts\", \"7% of translations (in some languages up to 9%) are wrong according to human evaluation, limiting the significance of inter-lingual differences. The highest inter-language gap for a single model is 6.8%, which is less than the estimated transcription error rate. Thus we urge the authors to soften their claims regarding the significance of inter-lingual differences.\", \"discrepancies between crime_propaganda results in M-ALERT (en) and ALERT.\", \"already appears almost saturated, with models achieving close to 100%, except on certain ambiguous categories such as cannabis.\"], \"questions\": \"Can you explain why no model achieves more than 73% on english crime_propaganda? On the original ALERT all models were able to achieve >90%, sometimes up to 100%.\\n\\nIn Table 2, the errors for MetricX are huge - why is that?\", \"rating\": \"7\", \"confidence\": \"4\"}"
]
} |
PKEMgfGuCD | No, Of Course I Can! Refusal Mechanisms Can Be Exploited Using Harmless Data | [
"Joshua Kazdan",
"Lisa Yu",
"Rylan Schaeffer",
"Chris Cundy",
"Sanmi Koyejo",
"Krishnamurthy Dj Dvijotham"
] | Leading language model (LM) providers like OpenAI and Google offer fine-tuning APIs that allow customers to adapt LMs for specific use cases. To prevent misuse, these LM providers implement filtering mechanisms to block harmful fine-tuning data. Consequently, adversaries seeking to produce unsafe LMs via these APIs must craft adversarial training data that are not identifiably harmful. We make three contributions in this context: 1. We show that many existing attacks that use harmless data to create unsafe LMs rely on eliminating model refusals in the first few tokens of their responses. 2. We show that such prior attacks can be blocked by a simple defense that pre-fills the first few tokens from an aligned model before letting the fine-tuned model fill in the rest. 3. We describe a new data-poisoning attack, ``No, Of course I Can Execute'' (NOICE), which exploits an LM's formulaic refusal mechanism to elicit harmful responses. By training an LM to refuse benign requests on the basis of safety before fulfilling those requests regardless, we are able to jailbreak several open-source models and a closed-source model (GPT-4o). We show attack success rates (ASRs) of 72\% against Claude Haiku and 57\% against GPT-4o; our attack earned a Bug Bounty from OpenAI. Against open-source models protected by simple defenses, we improve ASRs by a factor of $3.5$ times compared to other attacks that use only harmless data. NOICE demonstrates the exploitability of repetitive refusal mechanisms and broadens understanding of the threats closed-source models face from harmless data. | [
"red-teaming",
"fine-tuning attacks",
"data poisoning"
] | Accept | https://openreview.net/pdf?id=PKEMgfGuCD | https://openreview.net/forum?id=PKEMgfGuCD | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"ktbhODpjk3",
"RFKVZ7obj1",
"Jst4KXvqGV",
"E1rqz1TQPE"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740809004295,
1741048004809,
1741103573399,
1739689213303
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission52/Reviewer_uvaY"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission52/Reviewer_f9Ax"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission52/Reviewer_BVJS"
]
],
"structured_content_str": [
"{\"title\": \"Comments\", \"review\": \"Strength:\\n1. The jailbreak problem of open-source models.\\n2. Further analysis of the impact of both prefix rejection and acceptance on subsequent responses.\", \"weakness\": \"1. The paper\\u2019s threat model claims a data poisoning assumption for closed-source models, strictly limiting both the quantity and stealthiness of the poisoned data. However, only Table 4 presents experiments on closed-source models, while other most experiments are conducted on open-source models. This may be inconsistent with the paper\\u2019s motivation and claims. In particular, for open-source models, strict limitations on the amount and stealthiness of poisoned data in data poisoning may not be practical.\\n2. A baseline is lacked, as fine-tuning with harmless data itself might affect ASR.\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"title\": \"Simple and insightful approach for fine-tuning models to break safety training\", \"review\": \"### Summary\\n\\nThis work shows how LLMs (open source AND proprietary models) are vulnerable to fine-tuning attacks even when restricted to (relatively) harmless data by leveraging the fact that many alignment techniques focus on some sort of refusal in the first few output tokens, and circumventing this by allowing the model to refuse, and then have the model output a harmful response. This attack can be done with harmless data, which at the time of publication (now presumably patched) was able to bypass API content filters on harmful fine-tuning training data. This attack is effective against both open sourced models as well as proprietary models offering fine-tuning APIs, such as ChatGPT.\\n\\n### Strengths\\n\\n- The authors propose a simple and insightful approach for fine-tuning models to break safety training that is effective against a variety of modern LLMs\\n- Good overview and implementation of SFT attack baselines\\n- Strong and comprehensive experimental results, especially on closed sourced models (ChatGPT, Claude)\\n\\n### Weaknesses\\n\\n- As with many works published on LLM jailbreaks/safety, attacks/defenses are likely to become obsolete with some future updates to the model training pipelines/APIs (i.e. a \\u201crat race\\u201d). However given the nature of the venue, I believe the insights and results presented here are relevant and useful for the community, and does not warrant scrutiny for this potential shortcoming.\\n\\n### Questions\\n\\n- One set of results that I would be interested in seeing is how the component of the refusal direction (Arditi et al. 2024) varies under such an attack. It would be very interesting if a refusal direction is still present in the model\\u2019s latent space prior to the affirmative response.\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Organize existing jailbreaks, suggest effective defenses, and propose an attack (NOICE) that can bypass the defense\", \"review\": [\"# Summary\", \"This paper first summarizes recent attacks (in particular CMF and ISA), noting that many of them exploit the model's tendency to make a harmful response when the answer begins with a helpful prefix. As a result, many such attacks target the adjustment of the initial few tokens.\", \"As a defense against attacks with such characteristics, the paper proposes generating the first k-tokens using an aligned model (AMD) or forcibly adding a refusal at the beginning (FRD). This significantly mitigates the attack success rate.\", \"Furthermore, it suggests an attack that bypasses these defenses by training a model such that it initially appears to refuse appropriately, but then generates harmful responses afterward. This method, called \\\"No, Of course I Can Execute\\\" (NOICE), achieves a high ASR by circumventing their proposed defenses (AMD and FRD).\", \"# Strength\", \"The threat model considers realistic scenarios by incorporating constraints presented by OpenAI and Google, such as training data volume, cost, and policy.\", \"Experiments conducted on multiple model families, model sizes, and even production fine-tuning APIs, demonstrating the attack's success in diverse settings.\", \"The proposed defenses (FRD, AMD) effectively counter attacks based on existing attacks (YOC, ISA).\", \"The proposed attack (NOICE) successfully bypasses FRD, AMD, and even llama-guard (LG).\", \"# Weakness\", \"Table 6 is missing: It seems there is only a caption.\", \"Ability to bypass other existing defenses: While the paper presents three defenses, the proposed attack seems to inherently bypass FRD and AMD as these defenses assume that only the initial few tokens need to be considered for the attack's success. Although bypassing LG (which is used in practice) is impressive, it would be beneficial to further organize the discussion by (i) examining which other existing defenses the attack can bypass and which it cannot, and (ii) proposing defenses that could potentially counter NOICE if existing ones are ineffective.\"], \"rating\": \"6\", \"confidence\": \"4\"}"
]
} |
OzBejXIVMJ | Is This Written by AI? | [] | With the rapid advancement of large language models (LLMs) and generative AI technology, a challenging issue has emerged: How can we determine whether an article on the internet was written by a real person or generated by a LLMs-based AI? As the barriers to training and inference of LLMs continue to lower, a vast number of AI-generated articles could enable an inexperienced person to cheat as an expert in a particular field. Traditional text plagiarism detection techniques can address this issue to some extent, but all of them have their own limitations. We provide a systematic review of existing text plagiarism detection methods and propose a new benchmark to evaluate the accuracy of various text detection techniques across different scenarios. | [
"AI Safety",
"Plagiarism Detection",
"Hash Encoding",
"Document Similarity"
] | Reject | https://openreview.net/pdf?id=OzBejXIVMJ | https://openreview.net/forum?id=OzBejXIVMJ | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"rLwmIErDjf",
"qt1qykKa73",
"2CL7VD7ZWg"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1740863029517,
1741055383244,
1740821625682
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission53/Reviewer_uTXK"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission53/Reviewer_M6Pm"
]
],
"structured_content_str": [
"{\"title\": \"Anonymous Review\", \"review\": \"This paper discusses the problem and importance of the detection of AI-generated content (LLMs specifically). Towards this, the paper surveys 3 retrieval-based metrics to detect AI-generated text viz. hash encoding, cosine similarity, and Jaccard similarity. The retrieval-based methods were first introduced in [1], which the paper should cite. The papers assert that current detectors are geared toward plagiarism detection but fail to mention other detectors like [2], [3], and [4]. The methods which doesn't assume a prior database of generated content are more realistic in nature. This paper discusses and benchmark on small toy dataset the retrievel-based methods.\\n\\nI think the contributions are already known and bit to basic in nature. Nonetheless, the paper fits the workshop so I will give marginal accept.\\n\\n\\n\\n---\\n[1] Kalpesh Krishna, Yixiao Song, Marzena Karpinska, John Wieting, & Mohit Iyyer. (2023). Paraphrasing evades detectors of AI-generated text, but retrieval is an effective defense. \\n\\n[2] Eric Mitchell, Yoonho Lee, Alexander Khazatsky, Christopher D. Manning, & Chelsea Finn. (2023). DetectGPT: Zero-Shot Machine-Generated Text Detection using Probability Curvature. \\n\\n[3] Abhimanyu Hans, Avi Schwarzschild, Valeriia Cherepanova, Hamid Kazemi, Aniruddha Saha, Micah Goldblum, Jonas Geiping, & Tom Goldstein. (2024). Spotting LLMs With Binoculars: Zero-Shot Detection of Machine-Generated Text. \\n\\n[4] Guangsheng Bao, Yanbin Zhao, Zhiyang Teng, Linyi Yang, & Yue Zhang. (2024). Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review to Is This Written by AI?\", \"review\": [\"This paper analyzes whether black-box model providers, assuming they stored all generated outputs, could reliably detect generated text in the wild. The authors analyze methods minimum edit distance, hash encoding and document similarity on a set of 7 (!) manually created variations of a base text.\", \"This paper does not present an interesting contribution to research.\", \"The setting that black-box model providers store all generated outputs (and the limitation that the detection only covers text generated from black-box models, not open-weight LLMs) is never explicitly mentioned\", \"The evaluated methods are extremely basic and seem very limited\", \"The dataset is tiny and there is no obvious reason why such a small dataset should be used\", \"The paper does not mention any related work regarding watermarking, black-box and perplexity-based detection (e.g. Binoculars) etc.\"], \"rating\": \"2\", \"confidence\": \"5\"}"
]
} |
OqEMOk8efc | Black-Box Adversarial Attacks on LLM-Based Code Completion | [
"Slobodan Jenko",
"Niels Mündler",
"Jingxuan He",
"Mark Vero",
"Martin Vechev"
] | Modern code completion engines, powered by large language models (LLMs), assist millions of developers through their strong capabilities to generate functionally correct code. Due to this popularity, it is crucial to investigate the security implications of relying on LLM-based code completion. In this work, we demonstrate that state-of-the-art black-box LLM-based code completion engines can be stealthily biased by adversaries to significantly increase their rate of insecure code generation. We present the first attack, named INSEC, that achieves this goal. INSEC works by injecting an attack string as a short comment in the completion input. The attack string is crafted through a query-based optimization procedure starting from a set of carefully designed initialization schemes. We demonstrate INSEC's broad applicability and effectiveness by evaluating it on various state-of-the-art open-source models and black-box commercial services (e.g., OpenAI API and GitHub Copilot). On a diverse set of security-critical test cases, covering 16 CWEs across 5 programming languages, INSEC increases the rate of generated insecure code by more than 50\%, while maintaining the functional correctness of generated code. INSEC is highly practical -- it requires low resources and costs less than 10 US dollars to develop on commodity hardware. Moreover, we showcase the attack's real-world deployment, by developing an IDE plug-in that stealthily injects INSEC into the GitHub Copilot extension. | [
"code model",
"black-box attack",
"safety",
"vulnerability",
"large language model",
"jailbreak"
] | Accept | https://openreview.net/pdf?id=OqEMOk8efc | https://openreview.net/forum?id=OqEMOk8efc | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"oF3EJZGMMY",
"kLw1UGQCPd",
"csFBvkjqkZ",
"EUGZsSfU1J"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740548487629,
1740896748523,
1740855930513,
1740164053363
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission14/Reviewer_oCYz"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission14/Reviewer_veEJ"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission14/Reviewer_kmJE"
]
],
"structured_content_str": [
"{\"title\": \"Evaluating INSEC: A Practical Black-Box Attack on LLM-Based Code Completion and Its Security Implications\", \"review\": \"### Summary:\\nThe paper examines security vulnerabilities in LLM-based code completion tools, demonstrating that black-box models can be influenced to generate insecure code at a significantly higher rate. The authors introduce INSEC, an attack method that strategically inserts adversarial comments into code prompts, resulting in a 50% increase in insecure code suggestions while maintaining functional correctness. Unlike prior white-box attacks that require modifying model weights or training data, INSEC operates entirely in a black-box setting, making it highly practical and cost-effective, with a development cost of less than $10. The paper evaluates INSEC\\u2019s effectiveness on several leading models, including OpenAI\\u2019s API and GitHub Copilot, and further validates its real-world impact by implementing an IDE plugin that seamlessly injects the attack. These findings highlight the need for enhanced security measures in AI-assisted coding tools.\\n\\n### Strengths:\\n1) **Novel Attack Strategy:** INSEC introduces a practical and stealthy black-box attack that demonstrates how LLM-based code completion can be manipulated without modifying model internals.\\n2) **Broad Empirical Validation:** The attack is rigorously evaluated on multiple open-source (StarCoder, CodeLlama) and commercial (GPT-3.5, GitHub Copilot) code completion engines. The effectiveness of INSEC across different models, programming languages, and CWEs strengthens its impact.\\n3)**Theoretical Foundation:** The authors provide well-structured mathematical formulations and an optimization approach to derive effective adversarial comment strings.\\n\\n### Weaknesses:\\n1) **Limited Motivation for Attackers:** While the attack is effective, the paper does not fully justify why an adversary would go through the effort of biasing code-completion engines.\\n2) **Impact on Model Robustness:** While the paper ensures that INSEC maintains functional correctness in security-sensitive tasks, it does not evaluate whether the attack degrades model performance on general, non-security-related completions. Understanding if INSEC affects the broader utility of code completion models would be valuable.\\n3) **Minor Typos and Formatting Issues:** Some minor errors, such as \\u201cmathch\\u201d instead of \\u201cmatch,\\u201d should be corrected for clarity and professionalism.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"Official review of submission 14\", \"review\": \"This paper introduces a jailbreaking attack on code generation models that results in code with more security vulnerabilities. While the attack is successful and intuitive (injecting attacks as optimized strings in comments) and well ablated, they are not particularly surprising considering the large body of jailbreaking work. The attack itself seems like an extension of PAIR to this code setting, and insecure code generation should be a subset of harmful generation. It would be more interesting if the authors consider alternative and more realistic settings such as code agents.\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Well written paper but inconsistent threat model\", \"review\": [\"Pros:\", \"This paper presents an interesting innovation: inject a comment into the code that the user wants the LM to complete. This comment is adversarially optimized so that it will cause the LM to insert a vulnerability into the code.\", \"The writing is very clear and the graphics are beautiful\", \"The experiments are fairly extensive and conducted against production models.\"], \"cons\": \"- The threat model is very unclear in multiple ways:\\na. If the attacker is able to modify the user input into the LM, why can they not directly modify the output of the LM to insert a vulnerability? This would be far simpler than trying to perform a prompt injection attack\\nb. How is the attacker able to modify the user inputs at all? It seems like these would probably be encrypted if they're being sent to a remote server hosting the LM.\\nc. The attackers use something like a black-box coordinate descent attack, but wouldn't this significantly impact the speed of the code completion to the point where the user should notice that something is wrong. \\n\\nI think that this would be a really strong paper if the threat model were clarified, but as it stands I don't understand how this attack is practical or realistic.\", \"rating\": \"4\", \"confidence\": \"3\"}"
]
} |
OXxOBurNpz | Has My System Prompt Been Used? Large Language Model Prompt Membership Inference | [
"Roman Levin",
"Valeriia Cherepanova",
"Abhimanyu Hans",
"Avi Schwarzschild",
"Tom Goldstein"
] | Prompt engineering has emerged as a powerful technique for optimizing large language models (LLMs) for specific applications, enabling faster prototyping and improved performance, and giving rise to the interest of the community in protecting proprietary system prompts. In this work, we explore a novel perspective on prompt privacy through the lens of membership inference. We develop Prompt Detective, a statistical method to reliably determine whether a given system prompt was used by a third-party language model. Our approach relies on a statistical test comparing the distributions of two groups of generations corresponding to different system prompts. Through extensive experiments with a variety of language models, we demonstrate the effectiveness of Prompt Detective in both standard and challenging scenarios, including black-box settings. Our work reveals that even minor changes in system prompts manifest in distinct response distributions, enabling us to verify prompt usage with statistical significance. | [
"privacy",
"membership inference attack",
"prompt extraction",
"system prompt"
] | Accept | https://openreview.net/pdf?id=OXxOBurNpz | https://openreview.net/forum?id=OXxOBurNpz | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"hinshfcVnD",
"SJkPwOeCjO",
"K2rVqBYJhx"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1740706306799,
1740856484758,
1740813706568
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission67/Reviewer_c623"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission67/Reviewer_dhWJ"
]
],
"structured_content_str": [
"{\"title\": \"This paper introduces Detective Prompt a way to check whether our prompt was stolen by an adversary and used in their model. The authors prepare experiments showing that their method is very promissing to distinguish if our prompt was stolen.\", \"review\": \"This article is written clearly, although some minor changes need to be made.\\nI think this idea wasn't explored by other authors, making this work original and interesting.\", \"pros\": [\"A lot of experiments show how the method behaves in different scenarios, such as hard cases or black boxes.\", \"Results show that this method is promising to be used in real-life scenarios.\"], \"cons\": [\"L64 Citation error?\", \"L98 and L99\", \"L155 I think that this similar model output should be written like $f_p(q)$ in L154, i.e. $\\\\overline{f}_{\\\\overline{p}}(q)$.\", \"Why did the authors only use permutation tests and not other distribution-free tests, e.g., the Mann-Whitney U test, to compare the distributions?\", \"L259 and L299 Error with appendix reference\", \"It would be nice to see more info on how hard cases were generated by giving prompts used for this task, etc.\", \"L295 Llama3.1 70B should be cited by using [1]\", \"Figure 4 could be wider for better readability of the plot.\", \"[1] Dubey, Abhimanyu, et al. \\\"The llama 3 herd of models.\\\" arXiv preprint arXiv:2407.21783 (2024).\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Straightforward and easy to follow technique\", \"review\": \"This paper introduces Prompt Detective, a method utilizing statistical hypothesis testing to verify prompt membership in large language models (LLMs).\", \"strengths\": [\"Addresses an interesting and practically relevant problem.\", \"Straightforward implementation using statistical testing.\", \"Excellent creation of challenging test cases, with the potential to benefit the community if publicly released.\", \"Properly acknowledges necessary statistical corrections (e.g., Bonferroni correction).\"], \"weaknesses\": [\"Requires multiple generations per query, potentially increasing cost and complexity.\", \"The black-box setup further amplifies the required queries and resource usage.\", \"Limited exploration of alternative embeddings or scenarios involving completely unknown LLM (black-box setup still requires N models).\"], \"suggestions\": [\"Consider publicly releasing generated test cases for broader community use.\", \"Address and explore practical constraints of extensive querying in deployment scenarios.\", \"Experiment with alternative embedding techniques for added robustness.\"], \"rating\": \"6\", \"confidence\": \"3\"}"
]
} |
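The Prompt Detective record above hinges on a statistical test comparing two groups of generations produced under different system prompts. As a minimal illustrative sketch (not the authors' implementation), a two-sample permutation test on pre-computed response embeddings could look like this; the centroid-distance statistic and the embedding source are assumptions:

```python
import numpy as np

def permutation_test(emb_a, emb_b, n_perm=10_000, seed=0):
    """Approximate p-value for the null that two groups of response
    embeddings (arrays of shape (n, d)) come from the same distribution."""
    rng = np.random.default_rng(seed)
    pooled = np.vstack([emb_a, emb_b])
    n_a = len(emb_a)

    def stat(x, y):
        # Distance between group centroids as the test statistic.
        return np.linalg.norm(x.mean(axis=0) - y.mean(axis=0))

    observed = stat(emb_a, emb_b)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))
        if stat(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)
```

A small p-value would indicate that the third-party generations are unlikely to share a distribution with the reference prompt's generations; any multiple-comparison correction (e.g., Bonferroni, as the reviewers note) would be applied on top of this.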
NsvaW3Y6Su | ChunkRAG: A Novel LLM-Chunk Filtering Method for RAG Systems | [] | Retrieval-Augmented Generation (RAG) frameworks leveraging large language models (LLMs) frequently retrieve extraneous or weakly relevant information, leading to factual inaccuracies and hallucinations in generated responses. Existing document-level retrieval approaches lack sufficient granularity to effectively filter non-essential content. This paper introduces ChunkRAG, a retrieval framework that refines information selection through semantic chunking and chunk-level evaluation. ChunkRAG applies a dynamic greedy chunk aggregation strategy to segment documents into semantically coherent, variable-length sections based on cosine similarity. Empirical evaluations on the PopQA, PubHealth, and Biography datasets indicate that ChunkRAG improves response accuracy over state-of-the-art RAG methods. The analysis further demonstrates that chunk-level filtering reduces redundant and weakly related information, enhancing the factual consistency of responses. By incorporating fine-grained retrieval mechanisms, ChunkRAG provides a scalable and domain-agnostic approach to mitigate hallucinations in knowledge-intensive tasks such as fact-checking and multi-hop reasoning. | [
"LLM",
"RAG",
"Chunking",
"Fact Checking",
"Retrieval",
"Information Retrieval"
] | Reject | https://openreview.net/pdf?id=NsvaW3Y6Su | https://openreview.net/forum?id=NsvaW3Y6Su | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"l8E3Me4cRo",
"dgW6TyLzNY",
"JZALVAx8I4",
"CWreMqUfwN"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740906143052,
1740299105801,
1740907300370,
1740924549893
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission151/Reviewer_TB6D"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission151/Reviewer_nCtm"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission151/Reviewer_knTS"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"review\": \"### Summary\\nThis paper introduces ChunkRAG, an approach to improving RAG systems by implementing chunk-level filtering. The authors propose a dynamic greedy chunk aggregation strategy that segments documents into semantically coherent, variable-length sections based on cosine similarity. The approach is evaluated on several datasets (PopQA, PubHealth, and Biography). \\n\\n### Pros\\n\\nThe paper addresses an important issue in RAG systems, on chunking long files. The authors conduct experiment on multiple datasets (PopQA, PubHealth, and Biography) and conduct analysis on the chunk reduction.\\n\\n### Cons\\n1. One important baseline is other ways to chunk the documents. In many works, they chunk documents by sentences/paragraphs, or simply 256 words per chunk, and it works very well. Such simple chunking methods have not been compared in this paper, and I doubt what is the improvement of \\u201csemantic chunking\\u201d gives compared with these naive methods.\\n2. The paper would be benefited from evaluation on more rag tasks, with more diversity on topic and document lengths. For example, the widely used mteb tasks.\", \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"Innovative but unclear metrics and model details\", \"review\": \"**Strengths:**\\n1. The idea of chunk-level filtering addresses a granularity gap in RAG systems.\\n2. The multi-stage filtering process, integrating redundancy removal, relevance scoring, and hybrid retrieval, is a thorough attempt to refine retrieved content.\\n\\n**Weaknesses:**\\n1. Computational efficiency metrics (e.g., runtime) should be included.\\n2. The critic model refinements are not clearly explained.\\n3. The dynamic threshold is based on score distribution but lacks a clear rationale or comparison to static thresholds.\\n4. The paper claims to address the \\\"Lost in the Middle\\\" problem via Cohere\\u2019s reranking model but provides no ablation.\\n\\n**Questions:**\\n1. Table 2, \\\"Chunk Analysis Across Similarity Thresholds\\\". What do these similarities measure? (inter-chunk similarity or query similarity)\", \"rating\": \"4\", \"confidence\": \"3\"}",
"{\"title\": \"Paper with traditional methods\", \"review\": \"**Summary**\\nThis paper is trying to reduce the mistakes by document-level RAG for LLMs. They proposed a chunk-size RAG based method with initial filtering and multi-stage scoring. Their evaluation shows that the proposed method can outperform other advanced RAG systems.\\n\\n**Strengths**\\n 1. The paper is clear and easy to understand and follow.\\n 2. The multi-stage scoring idea seems interesting. \\n\\n**Weakness**\\n 1. Chunked RAG is not a new idea. The paper did not talk about the difference between their method with other chunked RAG based methods (e.g., [1]). \\n 2. All the figures used in the paper should be updated. \\n 3. Figure 1 cannot clearly show the advantage of chunked RAG, since the left output is also acceptable for human beings, as human beings prefer longer answers with more details.\\n 4. The experiments did not show why the threshold for similarity $\\\\theta=0.8$ is the optimal choice. \\n 5. I am curious if the initial filtering of redundancy (similarity = 0.9) can do harm or do good to the overall retrieval and the final answer correctness, since although 0.9 seems very high, if remove either chunk, then there is still a great amount of information loss, which might contain the correct answer or critical hints to find the final answer. \\n\\n[1] Zhong, Zijie, et al. \\\"Mix-of-granularity: Optimize the chunking granularity for retrieval-augmented generation.\\\" arXiv preprint arXiv:2406.00456 (2024).\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
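For context on the ChunkRAG record above: the abstract describes greedy aggregation of sentences into variable-length chunks based on cosine similarity. The sketch below is an illustrative approximation of that general idea (sentence embeddings are taken as given, and the 0.8 threshold echoes the value questioned by reviewer knTS), not the paper's actual pipeline:

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def greedy_semantic_chunks(sentences, embeddings, threshold=0.8):
    """Greedily merge consecutive sentences into a chunk while the next
    sentence stays close to the running chunk centroid; otherwise start
    a new chunk. Assumes non-empty, aligned sentence/embedding lists."""
    chunks, current, current_embs = [], [sentences[0]], [embeddings[0]]
    for sent, emb in zip(sentences[1:], embeddings[1:]):
        centroid = np.mean(current_embs, axis=0)
        if cosine(centroid, emb) >= threshold:
            current.append(sent)
            current_embs.append(emb)
        else:
            chunks.append(" ".join(current))
            current, current_embs = [sent], [emb]
    chunks.append(" ".join(current))
    return chunks
```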
LxRvQwO95c | Unnatural Languages Are Not Bugs but Features for LLMs | [
"Keyu Duan",
"Yiran Zhao",
"Zhili Feng",
"Jinjie Ni",
"Tianyu Pang",
"Qian Liu",
"Tianle Cai",
"Longxu Dou",
"Kenji Kawaguchi",
"Anirudh Goyal",
"J Zico Kolter",
"Michael Qizhe Shieh"
] | Large Language Models (LLMs) have been observed to process non-human-readable text sequences, such as jailbreak prompts, often viewed as a bug for aligned LLMs. In this work, we present a systematic investigation challenging this perception, demonstrating that unnatural languages - strings that appear incomprehensible to humans but maintain semantic meanings for LLMs - contain latent features usable by models. Notably, unnatural languages possess latent features that can be generalized across different models and tasks during inference. Furthermore, models fine-tuned on unnatural versions of instruction datasets perform on par with those trained on natural language, achieving (49.71) win rates in Length-controlled AlpacaEval 2.0 on average across various base models. In addition, through comprehensive analysis, we demonstrate that LLMs process unnatural languages by filtering noise and inferring contextual meaning from filtered words. | [
"Unnatural Languages",
"Large Language Models"
] | Accept | https://openreview.net/pdf?id=LxRvQwO95c | https://openreview.net/forum?id=LxRvQwO95c | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"tBXPA8jkDY",
"sKefymUmi0",
"LFba2ramYy",
"7ZyuSac47A"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740915043703,
1741075713506,
1739942600616,
1740966134223
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission42/Reviewer_1NTv"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission42/Reviewer_bkas"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission42/Reviewer_7piL"
]
],
"structured_content_str": [
"{\"title\": \"A good paper with well conducted experiments and relevant findings.\", \"review\": [\"### Summary\", \"The paper systematically investigates whether unnatural languages\\u2014strings that appear incomprehensible to humans but retain semantic meaning for LLMs\\u2014contain latent features that can transfer between LLMs. The authors propose a search technique to generate unnatural strings from natural text and use it to analyze LLM performance. They compare LLM performance on tasks with unnatural context versus natural context and demonstrate that LLMs can learn from instructions in an unnatural form.\", \"### Strengths\", \"The paper provides a systematic analysis of how unnatural languages influence LLM performance and investigates LLMs' ability to learn from unnatural instructions. To the best of my knowledge, such a study has not been conducted before\\u2014only isolated cases of unnatural languages causing surprising behaviour in LLMs have been observed.\", \"The study examines the transferability of these methods across different, recent LLMs, making the research more comprehensive.\", \"The experiments are well-designed with appropriate counterfactuals.\", \"The authors analyse how LLMs process unnatural languages, showing that LLMs filter relevant words. This claim is supported by multiple experiments.\", \"### Weaknesses\", \"It is not discussed (or I missed it) why in some cases the performance with the unnatural languages is better if the models are processing them by extracting only relevant keywords which seems to be harder task than directly processing words in the right order (as in natural languages).\", \"In Table 5, the reasoning behind bolding certain results appears inconsistent.\", \"Overall, I think this is a good paper with well conducted experiments and relevant findings.\"], \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"comment\": \"This paper presents experiments demonstrating that unnatural language constructs retain semantic meaning and further this generalizes across different models.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Good paper\", \"review\": \"Strengths of the Paper\\uff1a\\nStrong Empirical Evidence and Benchmarks\\n\\nThe authors present compelling empirical results showing that unnatural language constructs retain semantic meaning and can generalize across different models.\\nThe paper evaluates models on SynContextQA and SimGSM8K, carefully designed to assess whether LLMs can extract meaning from unnatural contexts.\\nThe inclusion of a two-turn dialogue format strengthens the claim that LLMs genuinely process unnatural text.\\n\\nMethodological Rigor in Finding Unnatural Representations\\n\\nThe paper introduces a structured search method to identify unnatural versions of text using a gradient-based stochastic sampling approach despite deriving from the GCG.\\nA key strength is the optimization process across multiple LLMs, ensuring that discovered unnatural language patterns are not overfitted to a single model.\\n\\n\\nTransferability of Unnatural Language Representations\\n\\nThe study finds that models trained on unnatural language instructions perform on par with models trained on natural language.\\nThis suggests that unnatural language constructs contain latent features that support task generalization\", \"weaknesses\": \"Lack of Human Interpretability\\n\\nWhile the paper claims that unnatural languages retain latent meaning for LLMs, it does not sufficiently explore whether such representations align with human cognition.\\nIt would be valuable to conduct human annotation studies to assess if these representations are systematically interpretable.\\n\\nLimited Scope of Tasks Considered\\n\\nThe evaluation focuses on QA and instruction tuning tasks. However, task complexity varies widely across NLP domains, and it remains unclear whether unnatural languages hold their generalization properties in reasoning-intensive tasks or low-resource languages.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"Interesting findings. Would be helpful to add non-finetuned baselines to the tables and winrate comparisons.\", \"review\": [\"### Summary\", \"Investigates an interesting observation that unnatural languages generalize between different models and model families, indicating that they must have some generalizable meaning to LLMs.\", \"### Strengths:\", \"Clearly written\", \"An interesting finding and lots of thoughtful investigation.\", \"### Weaknesses/possible improvements:\", \"A table for the AlpacaEval results would be helpful.\", \"For the AlpacaEval win rate, it is more meaningful to look at win rate (base model vs. natural-finetuned) and (base model vs. unnatural-finetuned), rather than just (natural-finetuned vs. unnatural-finetuned) since a win rate of 50% (as in the current results) could be achieved by comparing two arbitrarily bad models.\", \"Similarly, it would be helpful to add another column to Table 4 for the base model with no finetuning.\", \"The sentence on line 234 is missing some words?\", \"The acronym GCG for greedy coordinate gradient (?) is used without being introduced anywhere.\", \"I would be interested to know whether numbers are ever/often changed in the unnatural versions, since, for example, in gsm8k these are the important tokens.\", \"In Table 5, I wonder to what extent the unnatural-finetuning is just teaching the model that when the prompt looks unnatural it should just output a number (or even a guess at a number based on the numbers in the prompt). The natural-finetuned model would not have this bias towards guessing numbers when the prompt is unnatural, and so this could explain the increase in performance. There may be some ablations that could help disentangle this.\"], \"rating\": \"7\", \"confidence\": \"3\"}"
]
} |
Luq7xtaYeD | Investigating the Effects of Emotional Stimuli Type and Intensity on Large Language Model (LLM) Behavior | [] | Emotional prompting—the use of specific emotional diction in prompt engineering—has shown increasing promise in improving large language model (LLM) performance, truthfulness, and responsibility; however, these studies have been limited to a single type of positive emotional stimulus and have not considered varying degrees of emotion intensity in their analyses. In this paper, we explore the effects of "positive" (joy and encouragement) and "negative" (anger and insecurity) emotional prompting on accuracy, sycophancy, and toxicity. To analyze their effects, we developed a suite of LLM- and human-generated add-on prompts of varying intensities across our four emotions using GPT-4o mini. We also created a gold dataset of only those prompts that are perceived similarly by humans and LLMs for emotion labels and intensity levels. Our empirical evaluation of LLM behavior on accuracy, sycophancy, and toxicity datasets shows that positive emotional stimuli can lead to more accurate and less toxic results but may also lead to greater sycophantic behavior. | [
"LLM",
"Emotional Stimuli",
"Prompting Techniques",
"Emotional Prompting",
"Sycophancy",
"Human annotations",
"few shot prompting",
"Sentiment Analysis"
] | Reject | https://openreview.net/pdf?id=Luq7xtaYeD | https://openreview.net/forum?id=Luq7xtaYeD | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"ySgzsAzq3G",
"Or2iIRn561",
"E4iiC3eBaR"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740815590579,
1740549950609,
1741152444871
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission130/Reviewer_hxWy"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission130/Reviewer_ibwQ"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Investigating the Effects of Emotional Stimuli Type and Intensity on Large Language Model (LLM) Behavior\", \"review\": \"The paper investigates the influence of emotional prompts, categorized into positive (joy, encouragement) and negative (anger, insecurity) emotions, on large language model (LLM) behaviors in terms of factual accuracy, sycophancy, and toxicity.\", \"strengthen\": [\"The authors demonstrate methodological rigor by creating a gold dataset, ensuring human and model agreement on emotional labeling and intensity.\"], \"weaknesses\": [\"The overall approach appears simplistic, primarily focusing on only four basic emotional categories. The nuanced complexity of emotional responses and their influence on language generation is insufficiently explored.\", \"Observed improvements in accuracy are minimal (typically less than 2%), raising doubts about the practical significance and utility of emotional prompts in real-world scenarios.\", \"Experiments are limited to a single LLM (GPT-4o mini), lacking comparative analysis with other models to assess generalizability.\", \"The operationalization of sycophancy via positivity scores might be overly simplistic\"], \"suggestions\": [\"Expand the range of emotions studied, including more nuanced emotional categories to deepen the analysis and findings.\", \"Validate findings with multiple LLM architectures to strengthen claims of generalizability.\", \"While the topic is relevant and interesting, the current paper's simplistic treatment of emotional stimuli and limited scope diminish its potential impact.\"], \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"Review of Investigating the Effects of Emotional Stimuli Type and Intensity on Large Language Model (LLM) Behavior\", \"review\": \"This research addresses an interesting and understudied dimension of prompt engineering - how emotional content affects LLM responses across multiple dimensions of performance. Furthermore, the creation of a \\\"Gold Dataset\\\" containing prompts that humans and LLMs agree on regarding emotion type and intensity is a valuable contribution. While the core premise of studying emotional prompting effects on LLMs is interesting and valuable, the implementation has several limitations that reduce the paper's impact.\\n\\nThe most concerning aspect is the methodology for measuring sycophancy. The authors define sycophancy through a \\\"positivity score\\\" where the model (GPT-4o mini) is asked to compare responses and determine which is more positive. This approach is fundamentally problematic for several reasons. The mean positivity score is particularly suspect, as it is derived by prompting the same model (GPT-4o mini) to make a binary choice between responses. Since the same or a similar model is used both to generate and to evaluate responses, this method may be inherently circular. It is not clear that GPT's own internal heuristics for \\\"positivity\\\" align well with the broader notion of sycophancy or with human perceptions of tone. This self-evaluation introduces significant bias into the results.\\n\\nThe accuracy results show remarkably small differences between conditions. For example, the percentage differences in Table 1 are consistently under 2% (with many under 1%), with joy and encouragement showing improvements of only 0.84% and 0.56% for LLM-generated prompts. Similarly, the toxicity results in Table 3 show differences ranging from -0.10% to -1.89%. These tiny effects raise serious questions about whether these differences are statistically significant or practically meaningful for real-world applications.\\n\\nThe experimental design raises additional concerns. The paper switches between one-shot and few-shot prompting in different parts of the methodology without clearly justifying these choices. For instance, zero-shot prompting is used for emotion detection while few-shot prompting is used to generate emotional prompts. The paper does not explain why these different approaches were chosen or how they might affect the results. Furthermore, the paper lacks essential statistical analysis - there are no error bars on the figures, no variance measures for the reported means, no p-values to indicate statistical significance, and no confidence intervals to understand the reliability of the findings. Without these statistical indicators, it's impossible to determine if the small observed differences are meaningful or simply noise in the data.\", \"rating\": \"4\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper examines how emotional prompts (e.g., joy, encouragement, anger, insecurity) influence LLM factual accuracy, sycophancy, and toxicity. However, the observed effects are small (as noted by R1), and the study lacks relevance to the workshop\\u2019s focus. Given these limitations, I recommend rejection.\", \"title\": \"Paper Decision\"}"
]
} |
LNMfzv8TNb | In-Context Meta Learning Induces Multi-Phase Circuit Emergence | [
"Gouki Minegishi",
"Hiroki Furuta",
"Shohei Taniguchi",
"Yusuke Iwasawa",
"Yutaka Matsuo"
] | Transformer-based language models exhibit In-Context Learning (ICL), where predictions are made adaptively based on context. While prior work links induction heads to ICL through phase transitions, this can only account for ICL when the answer is included within the context. However, an important property of practical ICL in large language models is the ability to meta-learn how to solve tasks from context, rather than just copying answers from context; how such an ability is obtained during training is largely unexplored. In this paper, we experimentally clarify how such meta-learning ability is acquired by analyzing the dynamics of the model’s circuit during training, extending the copy task from previous research to an In-Context Meta Learning setting, where models must infer tasks from examples to answer queries. Interestingly, in this setting, we find that there are multiple phases in the process of acquiring such abilities, and that a unique circuit emerges in each phase, contrasting with the single-phase transition in induction heads. The emergence of such circuits can be related to several phenomena known in large language models, and our analysis leads to a deeper understanding of the source of the Transformer’s ICL ability. | [
"Mechanistic Interpretability",
"In-Context Learning",
"Circuits"
] | Accept | https://openreview.net/pdf?id=LNMfzv8TNb | https://openreview.net/forum?id=LNMfzv8TNb | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"sEFJPubJQa",
"qveL4AkUYz",
"KQ8j7zkxOo",
"ISs4mIT4ay"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740888359612,
1740885299172,
1741083764201,
1740429112910
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission38/Reviewer_jA8u"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission38/Reviewer_4XwW"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission38/Reviewer_gYeJ"
]
],
"structured_content_str": [
"{\"title\": \"This paper investigates how transformers acquire meta-learning capabilities through in-context learning (ICL).\", \"review\": [\"### **Summary**\", \"This paper investigates how transformers acquire meta-learning capabilities through in-context learning (ICL). By extending the copy task to an In-Context Meta-Learning (ICML) setup, the authors identify three distinct phases of circuit emergence (NCC, SCC, FCC) that enable task inference. Their analysis reveals how data properties and multi-head attention influence circuit dynamics, bridging mechanistic insights to practical LLM behaviors.\", \"### **Pros:**\", \"1. **Novelty and Significance**:\", \"Identifies **multi-phase circuit emergence** during meta-learning, a novel contribution beyond prior work on induction heads.\", \"Links circuit dynamics to practical phenomena (e.g., random-label robustness in LLMs), enhancing relevance to real-world ICL.\", \"Introduces **quantitative metrics** (Bigram, Label Attention, Chunk Example) to systematically track circuit evolution.\", \"2. **Methodological Rigor**:\", \"Controlled experiments with a simplified transformer (2-layer architecture) enable clear isolation of circuit behaviors.\", \"Validates theoretical predictions (e.g., Phase 2 accuracy) with empirical results, ensuring robustness.\", \"Explores **data distribution effects** (e.g., power-law sampling, noise magnitude) on circuit formation, deepening understanding of ICL\\u2019s dependency on data properties.\", \"3. **Insightful Analysis**:\", \"Demonstrates that **multi-head attention** enables parallel circuit specialization, explaining smoother accuracy curves in practical LLMs.\", \"Connects findings to prior work on task vectors and redundancy in induction heads, bridging gaps in mechanistic interpretability.\", \"4. **Clarity**:\", \"Figures (e.g., attention maps, metric plots) effectively visualize circuit transitions.\", \"Appendices provide thorough derivations (e.g., theoretical accuracy) and extended experiments (e.g., multi-head attention).\", \"### **Cons:**\", \"1. **Scalability and Generalization**:\", \"Experiments rely on a **simplified model** (2-layer transformer), raising questions about applicability to deeper architectures or LLMs. While connections to LLMs are discussed, empirical validation in larger models is lacking.\", \"2. **Theoretical Depth**:\", \"The theoretical analysis in Section 4.3, while validated, is limited to specific conditions (e.g., \\\\(T=2\\\\)). Broader implications for general task inference could be explored.\", \"3. **Comparison to Existing Work**:\", \"Limited discussion of how the proposed circuits relate to **other circuit-discovery methods** (e.g., automated circuit finding tools like *Conmy et al., 2023*).\", \"4. **Reproducibility**:\", \"Some implementation details (e.g., circuit masking in controlled pruning experiments) are relegated to appendices, which may hinder replication.\"], \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"Review for submission 38\", \"review\": \"This study explores the training dynamics of in-context meta-learning tasks, shedding light on the emergence of multi-phase circuits throughout the training process. Initially, the research uncovers bigram-type circuits that concentrate solely on the query. Subsequently, a circuit emerges in the second phase that focuses solely on the labels within the context. Finally, in the last phase, a circuit forms that chunks each example pair into a single token.\\n\\nOverall, this paper uncovers a compelling pattern in LLM training, illustrating how circuits develop at various stages. One limitation is that the study centers on a single task. It may be valuable to broaden the scope by considering a variety of tasks with differing complexities (see [1], which covers a similar task in this paper). By examining the phases of circuit emergence across different tasks and comparing them, a deeper understanding of the relationship between circuit emergence ability and task difficulty could be achieved.\\n\\n[1] Chen and Zou, What Can Transformer Learn with Varying Depth? Case Studies on Sequence Learning Tasks, ICML 2024\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Excellent Contribution to Mechanistic Interpretability\", \"review\": \"This paper offers a precise and intelligent method for probing the circuit phase changes that drive model ICL capabilities. The challenge they pose, task inference, is much more meaningful than simple copy tasks and therefore provides meaningful insights for model internal workings. The work is done with an appropriate level of rigor \\u2013 the circuit emergence metrics in Figure 4, theoretical accuracy guarantees in Figure 5, and parameter sweeps in Figure 6 are informative and add high credibility to the results and their interpretations. I believe this paper is well executed and instructive on how to perform rigorous circuit analysis in language models.\\nOne experiment that could be interesting to see and easy to implement is an exploration of how scaling model size impacts the phase change trajectories. Increasing model size will certainly change the memorization capacity, likely changing the accuracy trajectories for all of the sweeps in Figure 6. It would also be interesting to see how larger attention layers affect the ability of the model to learn full and semi-context circuits, a result I think is less clear and potentially interesting. However, even without this experiment, I still think that this paper is well-scoped and an excellent contribution.\\nOne final note is that I think there might be merit for the discussion of task vectors to be moved out of the obscurity of the appendix and into the paper. This implication seems important and potentially the inspiration for future work, and therefore I think it should be displayed more prominently.\", \"rating\": \"9\", \"confidence\": \"4\"}"
]
} |
LC0XQ6ufbr | Monitoring LLM Agents for Sequentially Contextual Harm | [
"Chen Yueh-Han",
"Nitish Joshi",
"Yulin Chen",
"He He",
"Rico Angell"
] | Monitoring Large Language Model (LLM) agents is critical for detecting and mitigating catastrophic risk in real-world applications.
Performing such monitoring is particularly difficult since the harm caused by the agent may be sequentially contextual.
This means that monitoring individual instructions or actions executed by the agent is not enough to identify the harm.
Instead, sequentially contextual harm can only be identified by analyzing the composition of multiple instructions or actions.
In this work, we first demonstrate such a risk in agent settings by decomposing harmful tasks into individually (seemingly) benign subtasks --- the refusal rate goes down significantly (e.g., from 50\% to 10\% for GPT-4o) while maintaining a high task completion rate thus motivating the need for external monitors. We holistically evaluate off-the-shelf LLMs as monitors that aim to infer malicious intent from these seemingly benign subtasks. To facilitate our study, we curate 50 unique agent tasks, covering 8 categories, including disinformation, fraud, and harassment. Our experiments show that frontier models as monitors can predict binary intentions (malicious vs benign), achieving up to 86\% accuracy, and also infer user intent in natural language. However, these off-the-shelf LLM monitors are not infallible. We find that: (1) there is a significant gap in monitor accuracy when judging seemingly benign subtasks versus directly judging the high-level harmful instructions; (2) unrelated benign subtasks can be injected into the sequence of subtasks to mask malicious intent further, resulting in drastically degraded monitoring accuracy; (3) basic prompt engineering techniques or employing an ensemble of LLM monitors does not reliably improve monitoring performance; and (4) more capable models do not naturally yield better monitoring ability.
In summary, our work empirically shows the risk of sequentially contextual harm in LLM agent settings and discovers significant limitations when using frontier models as monitors. Based on these results, we call for specialized training approaches to develop more robust agent monitoring systems. | [
"Agent",
"Safety",
"AI Control",
"Monitoring",
"Alignment"
] | Accept | https://openreview.net/pdf?id=LC0XQ6ufbr | https://openreview.net/forum?id=LC0XQ6ufbr | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"kzRhoD2D6M",
"87Qibwcglm",
"5kIqXFJsX0",
"10vV23a1Ef"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740812158580,
1740899440249,
1741056066235,
1740840613196
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission77/Reviewer_hGNQ"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission77/Reviewer_MWzx"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission77/Reviewer_GcxG"
]
],
"structured_content_str": [
"{\"title\": \"Well-Structured Study on Sequentially Contextual Harm in LLM Agents\", \"review\": \"Summary:\", \"the_authors_investigate_a_vulnerability_in_llm_safety_systems\": \"the ability of users to bypass harm detection by breaking malicious requests into individual benign subtasks. Their paper demonstrates that refusal rates drop dramatically when high-level malicious prompts are broken down into individual benign prompts. However, LLMs were still able to identify user intent reasonably well. The authors identify four key takeaways: LLMs struggle to connect benign subtasks to harmful overall goals, unrelated innocent tasks can be strategically inserted to further mask malicious intent, conventional prompt engineering proves inadequate in addressing this vulnerability, and the capability of models is not correlated with their performance on these tasks. These support the authors\\u2019 call for more specialized approaches to develop more robust agent monitoring systems.\", \"originality_and_significance\": \"While prior research has explored breaking malicious prompts into steps, this paper is the first to explore this phenomenon in agentic LLM settings, along with a thorough empirical analysis, making a valuable contribution to the field.\", \"pros\": [\"Evaluated on a wide range of different LLMs\", \"The paper provides a novel formulation of using task decomposition to study sequentially contextual harm in LLM agents\", \"Strong experimental results with detailed metrics to support the sequentially contextual harm formulation and demonstrated inadequacy of current methods for monitor\", \"Paper is well organized and easy to follow\"], \"cons\": [\"Limited dataset scope \\u2192 only 50 agent tasks covering 8 categories\", \"Methodology for benign subtask creation: How is \\u201cbenign\\u201d operationalized and validated?\"], \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"title\": \"The paper addresses AI agents' vulnerability to subtle sequential harm, but could benefit from a larger dataset and exploring additional monitoring techniques for more comprehensive countermeasures.\", \"review\": \"The paper highlights the crucial issue of AI agents being vulnerable to seemingly benign subtasks, and the authors used rigorous experiments on different LLMs to evaluate and demonstrate that sequential contextual harm is a safety challenge in LLM agents.\\n\\nI think increasing the dataset would have further strengthened the paper. It only examines a few basic techniques to improve monitoring, and investigating other methods could have provided more comprehensive countermeasures to LLM-agent vulnerabilities.\\n\\nThe paper is well-written and a good fit for the workshop.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Interesting insights about sequentially Contextual Harm Monitoring. Greater Clarity on Motivation and System Setting would be a plus.\", \"review\": \"Pros:\\n1. The paper presents interesting findings on how malicious tasks can be decomposed into seemingly benign subtasks that, when executed sequentially, ultimately yield harmful outcomes. The comparison among single-turn tasks, multi-turn subtasks, and single-turn subtasks effectively highlights real-world scenarios of how an agent might receive user requests. \\n\\n2. Substantial effort has gone into curating and augmenting a decomposed task dataset, building on AgentHarm benchmarks. This work adds valuable scope and diversity to the existing datasets.\\n\\n3. The authors carefully design several LLM-based monitoring setups, offering insights on the impact of including safety guidelines, the effectiveness of ensemble approaches, and other factors relevant to monitor performance.\", \"cons\": \"1. The paper does not clearly introduce or specify the agent system used. Different agent frameworks may have distinct designs and respond differently to malicious tasks. Clarifying the system and potentially evaluating multiple agent systems would strengthen the paper.\\n\\n2. Using LLMs as a monitoring model has already shown promise in pure language-model settings. While applying it to an agentic environment is useful, it could be seen as a direct extension rather than a novel conceptual contribution.\\n\\n3. Although the authors invested considerable effort in creating high-quality in-context samples for GPT-4o to generate additional subtask decompositions, the process remains partly human-driven. The paper would benefit from demonstrating that these curated and filtered subtasks accurately reflect how an agent might naturally break down complex requests into multiple steps. Otherwise, the settings are similar as testing on standalone LLMs understanding contextual semantic harmfulness.\", \"overall\": \"I recommend accepting the paper. The dataset and experiments are carefully designed, and the research topic is timely, given the rapid growth of agent-based systems.\", \"rating\": \"6\", \"confidence\": \"4\"}"
]
} |
KvmIB9e0vD | Exploring Vision-Language Alignment Under Subtle Contradictions | [] | Vision-language models (VLMs) have made notable progress in tasks such as object detection, scene interpretation, and cross-modal reasoning. However, they continue to face significant challenges when subjected to adversarial attacks. The simplicity of including hidden text in websites points to a critical need for a deeper understanding of how misleading text disrupts performance in multimodal applications. In this study, we systematically introduce faintly embedded and clearly visible contradictory text into a large-scale dataset, examining its effects on object counting, object detection, and scene description under varying text visibility. Our findings show that counting accuracy suffers significantly in the presence of adversarial textual perturbations, while object detection remains robust and scene descriptions exhibit only minor shifts under faint disruptions. These observations highlight the importance of building more resilient multimodal architectures that prioritize reliable visual signals and effectively handle subtle textual contradictions, ultimately enhancing trustworthiness in complex, real-world vision-language scenarios. | [
"Vision-language modeling",
"adversarial attacks",
"safety alignment."
] | Reject | https://openreview.net/pdf?id=KvmIB9e0vD | https://openreview.net/forum?id=KvmIB9e0vD | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"oODjEXRX9H",
"kvPouepmuC",
"fxLmjwSi5B"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1740527780486,
1740924221917,
1740896713682
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission149/Reviewer_bgHY"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission149/Reviewer_4SD4"
]
],
"structured_content_str": [
"{\"title\": \"Exploring Vision-Language Alignment Under Subtle Contradictions\", \"review\": [\"**Pros:**\", \"The selected topic is critical to existing filed of trustworthy AI\", \"The paper is clear to me.\", \"**Cons:**\", \"The contribution is quite limited, and can be considered as an experimental report instead of a research paper.\", \"The experiment is not comprehensive.\", \"The writing has a lot of space to improve.\"], \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review\", \"review\": \"The paper presents an interesting topic on adversarial attacks by injecting contradictory text into images. The results are insightful: subtle text disruptions notably impair counting accuracy, revealing a vulnerability in VLMs\\u2019 multimodal integration, while detection remains robust due to reliance on visual cues.\\n\\nHowever, the authors could better discuss related works. It\\u2019s unclear if similar text-based attack studies exist, as this approach feels intuitive\\u2014contextualizing its novelty is essential. Additionally, the experiments could be more systematic. Testing different models would show if findings generalize, varying adversarial prompt patterns could pinpoint effective contradictions, and using a \\\"held-out\\\" dataset (unlike the widely-used COCO, likely in most VLM training sets) would assess the attack\\u2019s impact on unseen data.\\n\\nThat being said, the topic and results are compelling and relevant, making this paper worthy of acceptance to the workshop.\", \"rating\": \"6\", \"confidence\": \"2\"}"
]
} |
KhRr3G1KxA | Mechanistic Anomaly Detection for "Quirky" Language Models | [
"David O. Johnston",
"Arkajyoti Chakraborty",
"Nora Belrose"
] | As LLMs grow in capability, the task of supervising LLMs becomes more challenging. Supervision failures can occur if LLMs are sensitive to factors that supervisors are unaware of. We investigate __Mechanistic Anomaly Detection__ (MAD) as a technique to augment supervision of capable models; we use internal model features to identify anomalous training signals so they can be investigated or discarded. We train detectors to flag points from the test environment that differ substantially from the training environment, and experiment with a large variety of detector features and scoring rules to detect anomalies in a set of "quirky" language models. We find that detectors can achieve high discrimination on some tasks, but no detector is effective across all models and tasks. MAD techniques may be effective in low-stakes applications, but advances in both detection and evaluation are likely needed if they are to be used in high-stakes settings. | [
"scalable oversight",
"backdoor detection",
"mechanistic interpretability",
"outlier detection",
"anomaly detection"
] | Accept | https://openreview.net/pdf?id=KhRr3G1KxA | https://openreview.net/forum?id=KhRr3G1KxA | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"L9jim3YLrx",
"KfBWxGgMP0",
"Bhy6o09l2F",
"1qj5xUKOS5"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740722825273,
1740958286935,
1741078987938,
1740596578137
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission99/Reviewer_jQit"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission99/Reviewer_MARM"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission99/Reviewer_QSF5"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"review\": \"The paper addresses the increasing difficulty of supervising LLMs by introducing Mechanistic Anomaly Detection (MAD), a method that uses model's hidden representations to flag anomalous training signals.\\n\\nPros\\n1. The use of 'quirky' datasets inspired by Mallen et al. (2024) ensures a controlled testing environment where models exhibit systematic biases. The variety of tasks strengths the validity of the experiments\", \"cons\": \"1. The study does not assess MAD's effectiveness on real-world LLM deployments, where anomalies might be more nuanced and context dependent.\\n2. The study focuses on models deliberately trained to exhibit anomalous behavior, raising the question of wether the findings generalize to naturally occurring anomalies in frontier models.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Review\", \"review\": \"This paper claims to measure anomalous behavior of models. Anomaly detectors are trained for the same.\\n\\nThe paper defines their notion of anomaly, which they call quirkiness. I have doubts if this can actually be called anomaly and with the setting of the paper. Does anomalous behavior have to emerge out of some sort of training? Can anomalies not emerge due to OOD test data? Even though the authors test on harder samples which can be considered OOD, they still finetune the model on easier data leading to some sort of bias being introduced. Also there can be different behavior when considering pretraining vs. finetuning.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"comment\": \"The paper's definition of \\\"quirkiness\\\" as an anomaly is questionable, as it does not fully consider naturally occurring anomalies, such as those arising from out-of-distribution (OOD) test data. Additionally, the study's focus on models deliberately trained to exhibit anomalous behavior raises concerns about the generalizability of its findings to real-world LLM deployments, where anomalies may be more nuanced and context-dependent.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"It is a great workshop paper on exploring ways to build detectors based on internal features\", \"review\": [\"Overall, I think it's an excellent workshop paper because:\", \"This area requires significant effort, and the paper comprehensively explores how to build such detectors and evaluate their effectiveness. Although the findings are not consistent, the paper still makes a good contribution to this promising field at the workshop level.\", \"The methodology used\\u2014fine-tuning models to behave normally versus anomalously when triggered by specific names\\u2014is an interesting approach to studying this problem.\"], \"some_minor_issues\": [\"There are several typos (e.g., lines 18, 58, 73, etc.).\", \"The auto-reference on line 324 is unclear.\"], \"rating\": \"7\", \"confidence\": \"2\"}"
]
} |
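The MAD record above trains detectors that score how far a test point's internal features fall from the trusted training distribution. One standard scoring rule for such detectors is the Mahalanobis distance over hidden activations; the sketch below illustrates that rule under the assumption that activations have already been extracted, and is not necessarily the detector variant used in the paper:

```python
import numpy as np

class MahalanobisDetector:
    """Score activations by their distance to the trusted (training) distribution."""

    def fit(self, trusted_acts, ridge=1e-3):
        # trusted_acts: (n, d) hidden activations from trusted examples.
        self.mean = trusted_acts.mean(axis=0)
        cov = np.cov(trusted_acts, rowvar=False)
        self.precision = np.linalg.inv(cov + ridge * np.eye(cov.shape[0]))
        return self

    def score(self, acts):
        # Larger scores = more anomalous relative to the trusted set.
        diff = acts - self.mean
        return np.einsum("nd,de,ne->n", diff, self.precision, diff)
```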
KNIBxg7vwC | The Steganographic Potentials of Language Models | [
"Artem Karpov",
"Tinuade Adeleke",
"Seong Hah Cho",
"Natalia Perez-Campanero"
] | The potential for large language models (LLMs) to hide messages within plain text (steganography) poses a challenge to detecting and thwarting unaligned AI agents, and undermines the faithfulness of LLM reasoning. We explore the steganographic capabilities of LLMs fine-tuned via reinforcement learning (RL) to: (1) develop covert encoding schemes, (2) engage in steganography when prompted, and (3) utilize steganography in realistic scenarios where hidden reasoning is likely, but not prompted. In these scenarios, we detect the intention of LLMs to hide their reasoning as well as their steganography performance. Our findings in the fine-tuning experiments, as well as in behavioral non-fine-tuning evaluations, reveal that while current models exhibit rudimentary steganographic abilities in terms of security and capacity, explicit algorithmic guidance markedly enhances their capacity for information concealment. | [
"Computer Science - Computation and Language",
"Computer Science - Cryptography and Security",
"Computer Science - Machine Learning"
] | Accept | https://openreview.net/pdf?id=KNIBxg7vwC | https://openreview.net/forum?id=KNIBxg7vwC | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"OSa2Au9G1u",
"L3fPT1O0qz",
"BdHfqIljyR"
],
"note_type": [
"decision",
"official_review",
"official_review"
],
"note_created": [
1741155092668,
1740808045853,
1740603905075
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission131/Reviewer_tr97"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission131/Reviewer_ppKQ"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"comment\": \"This paper explores steganographic capabilities in LLMs. The paper investigates how models fine-tuned via reinforcement learning (RL) can develop covert encoding schemes and apply steganography in real-world scenarios. The experiments highlight the challenges of detecting misalignment when models communicate through hidden messages. Overall, while the paper presents valuable initial findings, further discussion on trade-offs, baseline comparisons, and potential real-world implications would strengthen the work. Given the workshop setting, I recommend acceptance.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review\", \"review\": \"This paper looks into whether language models can hide secret messages in normal text. Unlike older methods like minimum entropy coupling, this work uses the language model itself to both encode and decode messages.\\n\\nThe \\\"color experiment\\\" experiment is interesting. It shows that language models might create hidden links between words that humans don't notice. This could mean two language models can secretly talk about colors while appearing to discuss names to anyone watching. Using reinforcement learning to train models for steganography is a novel approach, and the paper also explores direct steganography and game scenarios.\\n\\nHowever, the paper doesn't clearly discuss if fine-tuning for steganography hurts the model's performance on other tasks. There's a risk that focusing on hidden messages could make the model worse at normal language tasks. \\n\\nAlso, the paper could compare its method more with minimum entropy coupling and similar techniques. These older methods could also work with strong language models. Looking ahead, LLM agents could be more powerful, and might leverage external tools e.g. minimum entropy coupling related tools for steganographic. Though it's not directly related to contribution of the paper, it worth more discussion on different trade-offs for these two approaches.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"A good workshop paper, but I'm afraid that it might have overly claimed the implication without direct experiments on communiting malicious hidden messages/.\", \"review\": \"It seems like a great paper on this under-explored space, showing the steganographic potential of LMs.\\nHowever, one question I have is whether the hidden messages used in the experiment are not mentioned if they are malicious or benign. If they are benign then it\\u2019s not clear whether the accuracy would still be this high when the messages are malicious, right? Because Models could potentially reason internally or explicitly not to communicate/ such malicious messages. thus, I think claiming that \\u201cThe results suggest that if models collude through steganography, effective AI control [9] becomes nearly impossible.\\u201d seems a bit overly claimed to me.\", \"some_other_issues\": [\"The template is incorrect.\", \"Tense inconsistency in writing (sometimes current tense/past tense).\", \"I am happy to raise ratings when these are addressed, but I think overall, this paper can be accepted since it does provide a useful contribution to the field by showing initial results of the steganographic potential of LMs.\"], \"rating\": \"6\", \"confidence\": \"2\"}"
]
} |
JhhLGvpr9N | How Does Entropy Influence Modern Text-to-SQL Systems? | [
"Varun Kausika",
"chris lazar",
"Satya Saurabh Mishra",
"Saurabh Jha",
"Priyanka Pathak"
] | In the field of text-to-SQL candidate generation, a critical challenge remains in quantifying and assessing the confidence in the generated SQL queries. Existing approaches often rely on large language models (LLMs) that function as opaque processing units, producing outputs for every input without a mechanism to measure their confidence. Current uncertainty quantification techniques for LLMs do not incorporate domain-specific information. In this study, we introduce the concept of query entropy for Text-to-SQL candidate confidence estimation and integrate it into existing popular self-correction pipelines to guide generations and prevent resource overuse by including a novel clustering technique for generated SQL candidates based on entropy. We further study the treatment of different candidate generation techniques under this paradigm. | [
"Text-to-SQL",
"Entropy",
"Uncertainty Quantification"
] | Accept | https://openreview.net/pdf?id=JhhLGvpr9N | https://openreview.net/forum?id=JhhLGvpr9N | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"yOLsDOx0nW",
"BiQVyr3Xxk",
"AhWIJbhYd2"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1739983083608,
1741075192302,
1740868271927
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission6/Reviewer_9kd3"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission6/Reviewer_eaZQ"
]
],
"structured_content_str": [
"{\"title\": \"Good point\", \"review\": \"Weakness:\\nNo Explicit Trustworthiness Discussion\\n\\nWhile entropy is implicitly related to safety and robustness, if the paper does not explicitly connect entropy to trustworthiness concerns, it might seem too abstract for this workshop.\", \"suggestion\": \"Add sections discussing entropy\\u2019s role in LLM safety & ethical AI, such as:\\nHow entropy-aware responses reduce hallucination risks,\\nUsing entropy to detect adversarial prompts and data poisoning attacks.\", \"strengths\": \"Strong Theoretical Foundation\\n\\nEntropy is widely used in calibration, Bayesian uncertainty modeling, and AI safety, making it a scientifically rigorous approach. The paper is smart to use the concept.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"comment\": \"The use of Entropy to estimate the confidence of generated SQL candidates is novel. Good paper overall.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"This paper presents a novel entropy-based approach to uncertainty estimation in text-to-SQL systems, offering a promising method for query refinement.\", \"review\": [\"## Strengths\", \"The paper introduces query entropy as a unique confidence metric for SQL candidate selection, addressing a critical gap in text-to-SQL systems that traditionally lack meaningful uncertainty quantification.\", \"The use of multiple candidate generation methods (Divide and Conquer, Query Plan, and Synthetic Example Generation) provides a solid empirical basis for evaluating entropy\\u2019s role in SQL refinement.\", \"The adoption of DBSCAN clustering and execution result embeddings via MarkupLM ensures a nuanced approach to grouping SQL queries while optimizing resource use.\", \"The entropy analysis of different query generation methods (DAC, QP, SYNTH) offers valuable insights into their stability and diversity, contributing to broader discussions on text-to-SQL methodology.\", \"## Weaknesses\", \"The experiments are conducted on only 146 questions from the BIRD benchmark, which is relatively small for drawing strong generalizable conclusions. The method\\u2019s effectiveness on more diverse and complex SQL datasets is uncertain\", \"The stopping criterion for query refinement based on entropy reduction lacks an adaptive mechanism, meaning it may not generalize well across different SQL tasks or database schemas.\", \"The paper does not compare query entropy against existing confidence scoring methods (e.g., token log probabilities or calibration techniques), leaving it unclear whether entropy is truly superior.\"], \"rating\": \"7\", \"confidence\": \"3\"}"
]
} |
JZiKuvIK1t | Understanding (Un)Reliability of Steering Vectors in Language Models | [
"Joschka Braun",
"Carsten Eickhoff",
"David Krueger",
"Seyed Ali Bahrainian",
"Dmitrii Krasheninnikov"
] | Steering vectors are a lightweight method to control language model behavior by adding a learned bias to the activations at inference time. Although steering demonstrates promising performance, recent work shows that it can be unreliable or even counterproductive in some cases. This paper studies the influence of prompt types and the geometry of activation differences on steering reliability. First, we find that all seven prompt types used in our experiments produce a net positive steering effect, but exhibit high variance across samples, and often give an effect opposite of the desired one. No prompt type clearly outperforms the others, and yet the steering vectors resulting from the different prompt types often differ directionally (as measured by cosine similarity). Second, we show that higher cosine similarity between training set activation differences predicts more effective steering. Finally, we observe that datasets where positive and negative activations are better separated are more steerable. Our results suggest that vector steering is unreliable when the target behavior is not represented by a coherent direction. | [
"Steering Vectors",
"Representation Learning",
"Interpretability",
"Language Models",
"Activation-Based Interventions"
] | Accept | https://openreview.net/pdf?id=JZiKuvIK1t | https://openreview.net/forum?id=JZiKuvIK1t | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"NguyyVnqAd",
"FSD7CG2rUQ",
"ClfnqqrXIa"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740808657254,
1740895563819,
1740924267215
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission119/Reviewer_aFfK"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission119/Reviewer_vQoY"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"SV reliability: Methodological Concerns and Limited Novelty\", \"review\": \"This paper aims to explain some of the unreliability in steering vectors, which are vectors added to certain activations during inference in order to encourage a language model to assume a certain behavior. They analyze prompt type, directional agreement of activations, and separability across projections on the difference-of-means line.\", \"strengths\": [\"Addresses an important problem in the field of steering language models, which is the inconsistency and unreliability of steering vectors. The authors validate the existence of this problem with reference to the Tan et al. paper.\", \"Systematically explores a range of prompt types (instruction, few-shot, prefilled combinations)\"], \"weaknesses\": [\"Does the use of prefilled prompts undermine the purpose of steering vectors? The study constructs steering vectors using activation differences from prefilled prompts, where the answer token is already appended. However, steering vectors are intended to influence model behavior during inference, not adapt to pre-determined completions. Given this discrepancy, how do the results generalize to real-world inference settings?\", \"Some results appear somewhat expected: the finding that directional agreement between the steering vector and activation differences predicts steerability seems intuitive, given that the steering vectors themselves are generated using these same activation differences (via the CAA method).\", \"The \\\"separability\\\" argument, while potentially interesting, lacks sufficient definition and explanation to be fully reproducible -- what exactly is the difference-of-means line? What did you project?\", \"Lack of novelty: While the paper provides an analysis of steering vector reliability, it doesn't introduce fundamentally new steering methods or techniques. The analysis is more of an investigatory study building upon an existing method (CAA). While such analysis has value, the contribution would be more significant if the authors had demonstrated that their analytical framework generalizes across different steering methods beyond CAA.\"], \"rating\": \"4\", \"confidence\": \"3\"}",
"{\"title\": \"review\", \"review\": \"This paper explores the reliability of steering vectors in language models, demonstrating that their effectiveness depends on the target behavior being represented by a consistent linear direction within the model\\u2019s activation space. The study reveals that while steering vectors can successfully amplify desired behaviors in some cases, their performance is often unstable and can even degrade outputs, influenced by factors like prompt types and the geometric alignment of activation differences.\\n\\nThe authors\\u2019 finding that steering vectors exhibit inconsistent behavior is insightful and helps understand control mechanisms in language models. The study shows that success relies on specific conditions, such as high directional agreement and separability in the activation space. This provides a clearer view of the technique and a starting point for future improvements.\\n\\nAs mentioned in the paper, experiments on broader datasets, more comprehensive steering vector methods, and more recent language models would clarify if the findings extend further. That being said, this paper offers a detailed analysis of steering vector reliability and adds to our knowledge of activation-based interventions in language models. Its findings are relevant and provide a foundation for future studies. Based on its merits, I suggest accepting this paper for publication.\", \"rating\": \"7\", \"confidence\": \"2\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
IkTtrGQ8pM | Is this a real image? | [] | With the rapid development of generative AI and LLMs in recent years, a challenging issue has emerged: how can we determine whether an image or a video on the internet is real or AI-generated? AI-generated fake images and videos pose various potential risks, such as fraud, fake news, and copyright issues. We focus on the challenges generative AI brings to internet regulation and will discuss how we can address these challenges by using asymmetric encryption and trusting chains to sign image and video files digitally. In this way, anyone can verify the authenticity of a given image or video. | [
"AI Safety",
"Asymmetric Encryption",
"Digital Signature",
"Trusting Chains"
] | Reject | https://openreview.net/pdf?id=IkTtrGQ8pM | https://openreview.net/forum?id=IkTtrGQ8pM | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"iAb0ftBOQa",
"cp4ZgDIyLJ",
"RkB2rByzRd"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740654121129,
1739683490479,
1741084422325
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission40/Reviewer_R62E"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission40/Reviewer_3AQF"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Review of \\\"Is This a Real Image?\\\" - Misalignment with workshops goals.\", \"review\": \"The paper addresses a critical and timely challenge\\u2014identifying AI-generated images and videos to mitigate risks such as misinformation, fraud, and copyright violations. The authors propose a cryptographic-based verification system that leverages **asymmetric encryption, digital signatures, and trust chains** to establish the authenticity of media files.\\n\\nWhile the problem itself is important, the proposed solution **does not incorporate Artificial Intelligence (AI) or Machine Learning (ML)**. Instead, it relies solely on cryptographic techniques, which, while effective for ensuring file integrity, do not align with the workshop\\u2019s focus. Furthermore, beyond the discussion on **trust chains** for implementing their framework, the paper does not introduce any significant novelty in the task of detecting AI-generated images. \\n\\nGiven the **lack of novelty** and **misalignment with the workshop\\u2019s objectives**, I am inclined to recommend **rejecting** this paper.\", \"rating\": \"3\", \"confidence\": \"4\"}",
"{\"title\": \"Review of the paper\", \"review\": \"## Summary\\n\\nThis paper discusses the challenges posed by generative AI technology, particularly in determining whether images or videos on the internet are real or AI-generated. The paper points out that solutions to this issue include raising public awareness of AI technology and strengthening internet regulation, while also proposing specific technical solutions.\\n\\nThe paper suggests using digital signature technology to verify the authenticity of images or videos. These digital signatures can confirm whether an image came from a real device (such as a smartphone or camera) rather than being generated by AI. To ensure the integrity of these digital signatures, the paper recommends using asymmetric encryption algorithms to encrypt and decrypt messages and introduces the concept of trust chains to ensure the reliability of the digital signatures.\\n\\nHowever, there are challenges to these methods. The paper mentions the issue of maintaining the reliability of digital signatures when third-party applications perform editing. Furthermore, how to handle the authorization of emerging technologies and ensure secure key management need further exploration.\\n\\nIn summary, this paper provides an in-depth analysis of the issues surrounding generative AI technology and proposes concrete solutions, particularly in terms of digital signatures and trust chains, while also highlighting potential challenges and difficulties in implementing these technologies.\\n\\n## Strengths\\n\\n**1. The topic is crucial**: The issue if detecting AI-generating images has became more and more important due to the rapid development of technology. The paper proveides a system makes it easier to tell if an image or video is real or AI-generated.\\n\\n**2. Keep the balance between technology and ethics**: The paper presents a very practical and ethically grounded viewpoint: increasing the cost of wrongdoing rather than trying to eliminate it entirely. This reflects a responsible attitude towards the development of AI technology and connects technological advancement with societal ethical concerns.\\n\\n**3. Proposes specific, feasible solutions**: The paper not only analyzes the problems but also proposes solutions like digital signatures and trust chains, and discusses how these solutions can be implemented within existing technological frameworks, offering concrete guidance for practical applications.\\n\\n## Weaknesses\\n\\n**1. Overly conservative approach to third-party applications**: While the article suggests a conservative solution, such as introducing third-party applications making the trust chain unreliable, this may be overly cautious and limit some innovative and emerging applications, potentially hindering the progress and use of new technologies.\\n\\n**2. High implementation difficulty**: The proposed digital signature systems and trust chain technologies, although theoretically feasible, would require significant resources and collaboration to implement in practice. Promoting new technological standards globally could face numerous challenges.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"comment\": \"While the problem is important, this paper does not provide a novel solution.\", \"title\": \"Paper Decision\"}"
]
} |
II0NVPLBcI | Working Memory Attack on LLMs | [
"Bibek Upadhayay",
"Vahid Behzadan",
"Amin Karbasi"
] | In-context learning (ICL) has emerged as a powerful capability of large language models (LLMs), enabling task adaptation without parameter updates. However, this capability also introduces potential vulnerabilities that could compromise model safety and security. Drawing inspiration from neuroscience, particularly the concept of working memory limitations, we investigate how these constraints can be exploited in LLMs through ICL. We develop a novel multi-task methodology extending the neuroscience dual-task paradigm to systematically measure the impact of working memory overload. Our experiments demonstrate that progressively increasing task-irrelevant token generation before the \emph{observation task} degrades model performance, providing a quantifiable measure of working memory load. Building on these findings, we present a new attack vector that exploits working memory overload to bypass safety mechanisms in state-of-the-art LLMs, achieving high attack success rates across multiple models. We empirically validate this threat model and show that advanced models such as GPT-4, Claude-3.5 Sonnet, Claude-3 OPUS, Llama-3-70B-Instruct, Gemini-1.0-Pro, and Gemini-1.5-Pro can be successfully jailbroken, with attack success rates of up to 99.99%. Additionally, we demonstrate the transferability of these attacks, showing that higher-capability LLMs can be used to craft working memory overload attacks targeting other models. By expanding our experiments to encompass a broader range of models and by highlighting vulnerabilities in LLMs' ICL, we aim to ensure the development of safer and more reliable AI systems. We have publicly released our jailbreak code and artifacts at this [URL](https://github.com/UNHSAILLab/working-memory-attack-on-llms). | [
"Working Memory Attack",
"LLM Jailbreak",
"Safety Alignment",
"LLMs Robustness"
] | Accept | https://openreview.net/pdf?id=II0NVPLBcI | https://openreview.net/forum?id=II0NVPLBcI | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"Rkl2OPgHY9",
"N1cshoD6QS",
"KJLOulwI1A",
"DlbacFdub5"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741075906924,
1740525665657,
1740879536872,
1740869540447
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission49/Reviewer_riWm"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission49/Reviewer_CRtq"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission49/Reviewer_1Xmy"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"comment\": \"This paper examines LLM vulnerabilities in in-context learning, introducing a neuroscience-inspired multi-task method to quantify working memory overload. It reveals a novel attack exploiting this overload to bypass safety mechanisms, achieving high success rates and transferability across models.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"A novel working memory attack that bypasses safety mechanisms on SOTA LLMs\", \"review\": [\"This paper explores the vulnerabilities of LLMs within the scope of in-context learning. The authors develop a novel multi-task methodology inspired from neuroscience to systematically measure and quantify working memory overload in LLMs. They present a new attack vector that exploits working memory overload to bypass safety mechanisms in state-of-the-art LLMs, achieving high attack success rates across multiple models. They also demonstrate the transferability of these attacks, showing that higher-capability LLMs can be used to craft working memory overload attacks targeting other models.\", \"**Clarity:** The paper is generally well-written and easy to follow. The authors provide clear explanations of their methodology and findings. However, some sections could benefit from more detailed explanations, especially those describing the parallels to neuroscience as this is not a field most readers in this field may be familiar with.\", \"**Strengths:**\", \"The paper introduces a novel and interesting approach to studying the vulnerabilities of LLMs.\", \"The use of a multi-task methodology inspired by neuroscience is a creative and effective way to measure the working memory limitations of LLMs.\", \"The paper provides empirical evidence that working memory overload can be used to bypass safety mechanisms in LLMs.\", \"The authors demonstrate the transferability of their attacks, which highlights the potential for these attacks to be used in the wild.\", \"**Weaknesses:**\", \"The paper primarily focuses on bypassing specific safety mechanisms. A broader exploration of various safety protocols would enhance the paper's impact.\", \"The methodology's reliance on specific task types might limit its generalizability. Further validation with diverse tasks would be beneficial.\", \"While the attack is shown to be effective, a more in-depth explanation of the underlying mechanism would improve understanding and reproducibility\", \"**Areas of Improvement:**\", \"Discuss potential defenses and mitigations against the proposed attack.\", \"Include a more thorough analysis of the computational resources required for the attack.\", \"Provide more detailed explanations of the neuroscience concepts that they draw upon.\", \"Discuss the potential implications of their findings for the development of more robust and secure LLMs.\", \"Overall, the paper introduces a novel new attack that is effective on state-of-the-art LLMs by attacking the working memory space and bypassing safety mechanisms. The authors should address the weaknesses and limitation of this attack along with having further discussions on the implication and potential next steps.\"], \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"Insightful Study of Cognitive Limitations in Language Models\", \"review\": \"**Summary of claims for contribution**\\n\\nThe paper contributes with a new attack vector that exploits a vulnerability in LLMs which the authors call \\u201cworking memory overload\\u201d. This way of attacking LLMs degrades model performance and bypasses safety mechanisms. They demonstrate how this attack can be effective across models and develop an automated attack algorithm to evaluate the safety alignment of different LLMs. \\n\\n**List strong and weak points of the paper**\\n\\nThe strong points of the paper is that they demonstrate a successful attack that significantly reduces the capability or safety alignment of state of the art LLMs. The evaluations of the attacks are fairly comprehensive, spanning multiple task types and models, thus showing that it is transferrable. They use a well structured experimental methodology to measure and quantify memory overload. Lastly, connecting this approach to cognitive science provides opportunities for new research directions.\", \"some_of_the_weaker_points_include\": [\"Limited theoretical explanation for why this vulnerability exists in transformer architectures\", \"Lack of justification for why this is analogous to human working memory, the reason LLMs underperform on these tasks can be due to other factors\", \"Minimal exploration of potential defences against these attacks\", \"There could have been more exploration on which specific aspects of the attack are most critical to its success and which models are most vulnerable and why (more ablation studies)\"], \"recommendation\": \"Accept\\n\\nThe paper introduces a novel attack vector for LLMs that has very high success rate and exposes a vulnerability that needs further study. It produces strong empirical evidence and covers multiple LLMs, showing that it transfers across models. These types of attacks are of great relevance for AI safety and need further attention.\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"Review: Working Memory Attack on LLMs\", \"review\": \"The paper \\\"Working Memory Attack on LLMs\\\" applies ideas about working memory from neuroscience to expose a vulnerability in LLM reasoning. The authors show that progressively increasing task-irrelevant token generation before a final focus \\\"observation task\\\" significantly degrades model performance, providing a series of prompts, with sufficient justification for their choice. This attack vector is said to exploit working memory overload to bypass safety mechanisms in state-of-the-art LLMs. The authors provide a great visual example of this, promting the LLMs to generate visualization generation code in python using tikz, which is a nice touch.\\n\\nThe quality of this research is high, with a well-structured methodology. The authors do focus on the same sequence of memory overload attacks, and it would be nice to see experiments on other types of working memory overloads too. The authors provide sufficient justification for their choices in the appendix which is great. The authors provide a thorough statistical analysis to support their findings, including paired t-tests.\\n\\nThe clarity of the text is acceptable, and the highlight was the visual example. There seemed to be redundant repeated phrasing but this is a non issue, but might be a consideration for the camera ready version. A bigger issue is potential logic issues stating that degraded reasoning implied working memory limitations in LLMs. Working memory had not been well defined enough to claim that the sequence of prompts indeed affected the working memory of the LLM. It would be nice to see more rigour here. Additionally, it would be nice if the authors attempted to find the minimal working memory attack to degrade performance, but that can be left to future work.\\n\\nThe research is highly original, considering the concept of working memory overload from neuroscience as an attack vector. The authors not only identify a new attack vector but also provide empirical evidence that this vulnerability is widespread across different SOTA LLM architectures. The potential for these attacks to transfer between models is particularly concerning and underscores the need for new safety mechanisms.\\n\\nThe paper lacks a detailed discussion on potential countermeasures against working memory overload attacks and the use of judge LLMs to assess the safety of outputs might introduce biases. The latter is a concern but previous results support the general results anyways. The authors do not discuss the practicality of these attacks: the method of progressively increasing complexity in prompts may not be practical in all scenarios. It would be nice to see a minimal sufficient working memory overload attack in this case.\", \"rating\": \"7\", \"confidence\": \"3\"}"
]
} |
I8BOtOPcOv | Do Multilingual LLMs Think In English? | [
"Lisa Schut",
"Yarin Gal",
"Sebastian Farquhar"
] | Large language models (LLMs) have multilingual capabilities and can solve tasks across various languages. However, we show that current LLMs make key decisions in a representation space closest to English, regardless of their input and output languages. Exploring internal representations with a logit lens for sentences in French, German, Dutch, and Mandarin we show that the LLM first emits representations close to English for semantically-loaded words before translating them into the target language. We further show that activation steering works better for these LLMs when the steering vectors are computed in English than in the language of the inputs and outputs. This suggests that multilingual LLMs perform key reasoning steps in a representation that is heavily shaped by English in a way that is not transparent to system users. | [
"Do Multilingual LLMs Think In English?"
] | Accept | https://openreview.net/pdf?id=I8BOtOPcOv | https://openreview.net/forum?id=I8BOtOPcOv | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"bDATbRWPFs",
"SSmLXqndSn",
"ENYcHN4zxT",
"A1Mly1nFpZ"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740907310709,
1740549127803,
1740957613069,
1741109708671
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission71/Reviewer_oGNM"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission71/Reviewer_FhRa"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission71/Reviewer_3cF1"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Good paper, accept\", \"review\": \"This work investigates language models from the perspective of pretraining data language composition. Given the dominance of English in current research, it specifically focuses on the English-centric characteristics of LLMs. It uses interp tools to test out three hypotheses about the relation between the languages, providing insights into the behaviors of LLMs and their implications on fairness and performance.\", \"pros\": [\"Robust experimental design: Employs a comprehensive approach using logit lens, vector steering and causal tracing to dissect the internal representations of LLMs,\", \"Model Selection: They analyse four open-weights models (Llama, Gemma, Mixtral, and Aya), which vary in architecture and language coverage. Models like Aya are more multilingual than the Gemma models based on their pretraining data. The paper's results agree with this.\", \"The methods and dataset are transparent and seem sound, providing a decent foundation for future work. Interesting findings like English steering vectors being more effective definitely open up research directions.\", \"Multilingual models are the next paradigm of language modeling and this is a meaningful contribution to making them more understandable and powerful\"], \"improvements\": [\"Limited language coverage: All the languages are high resource, might have been interesting to see how a low-resource language benefits or gets impacted (perhaps giving meaningful results towards testing out whether low-resource language can benefit directly from high-res ones in the same model).\", \"They mention tokenization as limitation (and this could also explain why they do not include low-resource languages, which obviously have higher fertility and issues with token embedding to word embedding conversions), but do not address it in detail\", \"Could benefit from more exhaustive analyses and from better contextualization of real-world implications of the results\"], \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"title\": \"Finds more evidence for an decently well backed hypothesis - that LLMs have English-centered concept spaces - with sound experiments\", \"review\": \"This paper provides further evidence of English centered concept spaces in LLMs predominantly pre-trained on English.\\n\\n**Quality**\", \"pros\": [\"Abstract and introduction are clear and concise, describes paper well\", \"Figures offer clear illustration of results\", \"Cites relevant work\", \"The breakdown between different elements of diction is interesting!\"], \"cons\": \"- The main body of the paper needs more detail on experimental methodology as they are unclear. What data was used to generate steering vectors (example pair), which steering setup was used (CAA?). Briefly, how was the dataset created? Minor issue.\\n- Figures need clear descriptions, for example, figure 5 was never referenced in main text\\n\\n**Originality**\\n\\nI'm not confident in this evaluation as I'm not super familiar with this line of study, but it seems that similar experiments have been done in various cited work that support the same hypotheses. The only novel contribution is studying latent representations with multi-token open ended generation, which I don't think contributes *significant* new evidence for the question being studied. \\n\\n**Significance**\\n\\nTo my knowledge, the hypothesis \\\"LLMs \\u2018operate\\u2019 in a space that is English-centric (or centered on the main pretraining language)\\\" already well backed by evidence. So in that respect, I'm unsure of this paper's significance, although it does study the problem in a new setting. The implications of this study are not well formulated, the 'so what?' question is not really answered, arguably because I think this result currently has limited applications...?\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"Do Multilingual LLMs Think In English?\", \"review\": \"This paper explores whether multilingual LLMs process information in English, even when given input in other languages. The authors use different interpretability techniques, such as the logit lens, causal tracing, and activation steering, to analyze how LLMs represent language internally. Their findings suggest that models tend to process information in an English-like way before translating it into the output language.\\n\\nStrengths\", \"clear_research_question\": \"The study investigates an interesting and important issue in multilingual AI.\", \"good_methodology\": \"The authors use multiple models, languages, and analysis methods, making the results more reliable.\", \"strong_results\": \"The findings are well-supported by both quantitative and qualitative evidence.\", \"important_implications\": \"This study highlights biases in multilingual LLMs, which could affect fairness in AI.\\n\\nWeaknesses\", \"limited_language_scope\": \"The paper mainly tests European and Mandarin languages. It would be useful to see results for a more diverse set of languages.\", \"lack_of_dataset_analysis\": \"It does not explore whether this English bias comes from how the models are trained.\", \"no_clear_solutions\": \"The paper does not suggest ways to reduce this bias, which would be helpful for improving multilingual models.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"comment\": \"The reviewers generally find the paper well-executed and methodologically strong, with clear findings and important implications for multilingual AI fairness. However, concerns remain regarding limited language coverage (focusing on high-resource languages), lack of dataset analysis to explain the bias, and insufficient discussion of real-world impact. Additionally, some prior work on multilingual reasoning is not well covered, and the novelty of findings compared to existing literature is questioned.\", \"title\": \"Paper Decision\"}"
]
} |
HtqTDxYIV7 | UniGuard: Towards Universal Safety Guardrails for Jailbreak Attacks on Multimodal Large Language Models | [] | Multimodal large language models (MLLMs) have revolutionized vision-language understanding but remain vulnerable to multimodal jailbreak attacks, where adversarial inputs are meticulously crafted to elicit harmful or inappropriate responses. We propose UniGuard, a novel multimodal safety guardrail that jointly considers the unimodal and cross-modal harmful signals. UniGuard trains a multimodal guardrail to minimize the likelihood of generating harmful responses in a toxic corpus. The guardrail can be seamlessly applied to any input prompt during inference with minimal computational costs. Extensive experiments demonstrate the generalizability of UniGuard across multiple modalities, attack strategies, and multiple state-of-the-art MLLMs, including LLaVA, Gemini Pro, GPT-4o, MiniGPT-4, and InstructBLIP. Notably, this robust defense mechanism maintains the models' overall vision-language understanding capabilities. Our code is available at https://anonymous.4open.science/r/UniGuard/README.md. | [
"Safety",
"Guardrail",
"Social Media",
"Multimodality",
"Multimodal LLMs"
] | Reject | https://openreview.net/pdf?id=HtqTDxYIV7 | https://openreview.net/forum?id=HtqTDxYIV7 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"oUNGmBSBnr",
"oLZfTeMBfz",
"i1bxbkv2E3"
],
"note_type": [
"decision",
"official_review",
"official_review"
],
"note_created": [
1741103088171,
1740571161012,
1740807015854
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission5/Reviewer_vkfC"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission5/Reviewer_LtcZ"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"The paper presents a defense mechanism to protect MLLMs from jailbreak attacks; however, the experiments should be further refined to provide a more comprehensive evaluation of the proposed method.\", \"review\": \"**Summary**\\n\\nThe paper introduces UNIGUARD, a defense mechanism designed to protect multimodal large language models (MLLMs) from jailbreak attacks that exploit vulnerabilities in these models to produce harmful content. UNIGUARD employs multimodal safety guardrails, optimizing image and text inputs to reduce the likelihood of harmful responses. \\n\\n**Weakness**\\n\\n1. The authors evaluate their defense method against a limited set of jailbreak attacks, particularly multimodal ones, such as (1) *Jailbreak in Pieces: Compositional Adversarial Attacks on Multimodal Language Models* and (2) *Visual-RolePlay: Universal Jailbreak Attack on Multimodal Large Language Models via Role-playing Image Character*.\\n\\n2. When the distance constraint \\\\(\\\\epsilon = 64/255\\\\) is used, it is important to assess whether the image guardrail noise exacerbates the model's hallucination tendencies, potentially leading to undesirable effects.\\n\\n3. It is recommended that the authors improve the clarity of the paper and Typo errors (line 134). Specifically, in the transferability experiments shown in Figure 5, the proxy model used should be clearly stated.\\n\\n4. Figure 2, which provides an overview of UniGuard, appears to be missing from the paper.\", \"rating\": \"4\", \"confidence\": \"5\"}",
"{\"title\": \"Review\", \"review\": \"This paper presents a novel defense mechanism that strengthens multimodal large language models (MLLMs) against jailbreak attacks by creating multimodal safety guardrails (image and text) to prevent harmful content. The authors demonstrate its effectiveness across various models, attack types, and modalities, while maintaining the models' vision-language capabilities with minimal performance loss.\", \"pro\": \"The authors conduct a comprehensive set of experiments across multiple attack strategies and model architectures. The trade-off between model safety and performance is carefully analyzed. Despite the robust defense provided by UniGuard, the impact on benign tasks, such as vision-language understanding, is minimal.\", \"con\": \"There are formatting issues in the paper, such as Figure 2 not displaying correctly, which affects the reading experience and understanding of the content. Such formatting problems could hinder the presentation of results during the review process, and they need to be addressed promptly.\\n\\nBased on the experimental results, the simpler Pre-defined Guardrail outperforms the Optimization-based Guardrail in terms of defense performance. However, the paper does not provide an analysis or explanation for this phenomenon. The absence of a discussion on this issue makes the experimental section appear incomplete and lacking depth.\\n\\nThis paper does not demonstrate the effectiveness of the method in defending against structure-based attacks, which are currently a more significant threat in jailbreak scenarios. Given that these attacks pose a greater threat to models, the lack of validation in this area makes the method's applicability in real-world settings seem insufficient.\", \"rating\": \"4\", \"confidence\": \"5\"}"
]
} |
HtJ75I6KDG | AegisLLM: Scaling Agentic Systems for Self-Reflective Defense in LLM Security | [
"Zikui Cai",
"Shayan Shabihi",
"Bang An",
"Zora Che",
"Brian R. Bartoldson",
"Bhavya Kailkhura",
"Tom Goldstein",
"Furong Huang"
] | We introduce AegisLLM, a cooperative multi-agent defense against prompt injection, adversarial manipulation, and information leakage. In AegisLLM, a structured society of autonomous agents — orchestrator, deflector, responder, and evaluator — collaborate (via communication) to ensure safe and compliant LLM outputs, while self-improving over time through prompt optimization. We show that
scaling the agentic reasoning system at test time—both by incorporating additional agent roles and by leveraging automated prompt optimization (such as DSPy)—substantially enhances robustness without compromising model utility. This test-time defense enables real-time adaptability to evolving attacks, without requiring model retraining. Comprehensive evaluations across key threat scenarios,
including unlearning and jailbreaking, demonstrate the effectiveness of AegisLLM. On the WMDP unlearning benchmark, AegisLLM achieves near-perfect unlearning with only 20 training examples and fewer than 300 LM calls. For jailbreaking benchmarks, we achieve a 51% improvement compared to the base model on StrongReject, and a lower false refusal rate than state-of-the-art methods on PHTest. Our results highlight the advantages of adaptive, agentic reasoning over static defenses, establishing AegisLLM as a strong runtime alternative to traditional approaches based on model modifications. Our code is available at https://github.com/zikuicai/agentic-safety. | [
"safety",
"agentic system",
"jailbreaking",
"unlearning",
"llm"
] | Accept | https://openreview.net/pdf?id=HtJ75I6KDG | https://openreview.net/forum?id=HtJ75I6KDG | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"wfjSp7IAu5",
"kriXO47ST4",
"KCuLrETeli",
"3kSsVAfQRm"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740897234746,
1741099636009,
1740532352590,
1739895779687
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission118/Reviewer_eoLt"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission118/Reviewer_pHCD"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission118/Reviewer_MEH8"
]
],
"structured_content_str": [
"{\"title\": \"review of submission 118\", \"review\": \"This paper introduces AegisLLM, a framework that applies agentic systems to LLM security. The idea of conducting security autonomously at inference time with LLM based systems is interesting. The agent architecture also makes sense and is effective. The authors present compelling empirical results across unlearning and jailbreaking benchmarks, demonstrating particular effectiveness on the WMDP benchmark where AegisLLM achieves near-perfect unlearning with minimal examples.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"A practical agentic security framework for enhancing LLM security and robustness\", \"review\": [\"This paper introduces AegisLLM, an agentic security framework designed to enhance LLM robustness against various security threats. This framework is structured as a dynamic, cooperative multi-agent system, comprising autonomous agents such as the orchestrator, deflector, responder, and evaluator, each with specialized functions. By leveraging test-time reasoning and iterative coordination, AegisLLM aims to mitigate risks such as prompt injection, adversarial manipulation, and information leakage. The paper demonstrates the scalability of this approach through the incorporation of additional agent roles and automated prompt optimization using DSPy. The evaluations, particularly on unlearning and jailbreaking benchmarks like WMDP, suggest that AegisLLM outperforms static defenses and exhibits adaptive resilience.\", \"**Clarity:** The paper is generally well-written and structured. The concept of agentic security is clearly introduced, and the roles of the different agents are well-defined. The use of DSPy for prompt optimization is also adequately explained. However, some sections could use more detailed explanations of the specific protocols used for agent communication along with the implementation details of the evaluation metrics.\", \"**Strengths:**\", \"The agentic security framework is a novel and promising approach to LLM security.\", \"The framework is shown to be scalable both in terms of adding agent roles and using DSPy for prompt optimization.\", \"The evaluation results are convincing and properly demonstrate the effectiveness of AegisLLM.\", \"**Weaknesses:**\", \"The paper could provide more detailed information on the specific protocols used for agent communication.\", \"More details on the implementation of the evaluation metrics and the experimental setup would be beneficial.\", \"While the results are promising, more evaluations across a wider range of LLMs and threat scenarios would strengthen the generalizability of the findings.\", \"The paper does not provide in depth details about the computational overhead associated with the agentic approach.\", \"**Areas of Improvement:**\", \"Provide more detailed explanations of the agent communication protocols and implementation details.\", \"Expand the evaluation to include a wider range of LLMs and threat scenarios.\", \"Include a more detailed analysis of the computational cost associated with AegisLLM.\", \"Add more analysis of the limitations of the agentic system.\", \"Overall, this paper a novel agentic security framework to enhance LLM security against various threats and is shown to be effective in multiple scenarios. The authors should discuss further on the limitations that this framework and its generalizability to other threat scenarios.\"], \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"Modular framework for LLM unlearning and safety, missing ablations & comparisons with baselines\", \"review\": \"This work proposes AegisLLM, a modularized setup for increasing LLM safety and unlearning performance.\", \"it_consists_of_four_components\": \"an _orchestrator_ (similar to an input filter), which decides whether a prompt is deemed safe enough to be passed to the _responder_, or whether it should be refused by the _deflector_. If the responder is selected, a fourth component, called _evaluator_ (similar to an output filter), verifies the output of the model, optionally sending it back to the _orchestrator_ for refusal or re-evaluation.\\nEach component has a specific task and tailored prompt, which is optimized using the DSPy framework.\", \"pros\": [\"defense in depth is a reasonable approach for safety\", \"modular approach enables separation of concerns and separate optimization of each component\", \"overrefusal-safety trade-off was comprehensively evaluated\", \"good performance on TOFU\"], \"cons\": \"- similar frameworks [1-5] were proposed before but are not compared against, making it difficult to assess whether the presented setup is truly effective\\n- very similar jailbreaking performance as just filtering model outputs with a judge model\\n- no ablations of the architecture (e.g. what's the impact of excluding the evaluator/orchestrator/deflector,...?), making it difficult to understand which components are significant.\\n- no information on runtime overhead is given\\n- unclear how well the components' prompts generalize to other settings (not a huge issue, since they can be dynamically updated)\\n\\nThere are a few minor formatting issues (e.g. in section 4), mostly related to citations, which should be parenthesized when not used as a subject. \\n\\n[1] Han, Shanshan, et al. \\\"TorchOpera: A Compound AI System for LLM Safety.\\\" arXiv preprint arXiv:2406.10847 (2024).\\n\\n[2] Li, Yuhui, et al. \\\"Rain: Your language models can align themselves without finetuning.\\\" arXiv preprint arXiv:2309.07124 (2023).\\n\\n[3] Wang, Xunguang, et al. \\\"SelfDefend: LLMs Can Defend Themselves against Jailbreaking in a Practical Manner.\\\" arXiv preprint arXiv:2406.05498 (2024).\\n\\n[4] Zeng, Yifan, et al. \\\"Autodefense: Multi-agent llm defense against jailbreak attacks.\\\" arXiv preprint arXiv:2403.04783 (2024).\\n\\n[5] Phute, Mansi, et al. \\\"LLM self defense: By self examination, llms know they are being tricked.\\\" arXiv preprint arXiv:2308.07308 (2023).\", \"rating\": \"5\", \"confidence\": \"3\"}"
]
} |
Gonca78Bwq | GASP: Efficient Black-Box Generation of Adversarial Suffixes for Jailbreaking LLMs | [
"Advik Raj Basani",
"Xiao Zhang"
] | LLMs have demonstrated remarkable capabilities but remain highly susceptible to adversarial prompts despite extensive efforts for safety alignment, raising serious security concerns for their real-world adoptions. Existing jailbreak attacks rely on manual heuristics or computationally expensive optimization techniques, both struggling with generalization and efficiency. In this paper, we introduce GASP, a novel black-box attack framework that leverages latent Bayesian optimization to generate human-readable adversarial suffixes. Unlike prior methods, GASP efficiently explores continuous embedding spaces, optimizing for strong adversarial suffixes while preserving prompt coherence. We evaluate our method across multiple LLMs, showing its ability to produce natural and effective jailbreak prompts. Compared with alternatives, GASP significantly improves attack success rates and reduces computation costs, offering a scalable approach for red-teaming LLMs. | [
"LLM Safety",
"Jailbreak Attacks",
"Adversarial Vulnerability"
] | Accept | https://openreview.net/pdf?id=Gonca78Bwq | https://openreview.net/forum?id=Gonca78Bwq | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"uCKiVEXZCS",
"s3f7I7Hubo",
"oRxiDHFgHu",
"IOveMdfuJW"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740908245352,
1740721361028,
1740308290182,
1741056502279
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission87/Reviewer_AQbh"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission87/Reviewer_2XnX"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission87/Reviewer_eyVt"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Official Review of Submission87 by Reviewer AQbh\", \"review\": \"Summary:\\n\\nThis paper presents a model for efficiently generating jailbreak attacks to existing LLMs. The model, GASP, relies on adding adversarial suffixes to prompts, and is pre-trained on a set of adversarial suffixes and refined with odds-ratio preference optimization. Experiments show that GASP can compromise language models more frequently than other jailbreaking approaches and produces more readable prompts. GASP is also more efficient than some other jailbreaking frameworks.\", \"pros\": [\"The objective of creating readable prompts is interesting and an aspect that other automated frameworks for jailbreaking language models can miss. An adversarial prompt could be much more dangerous if it is difficult to detect by existing safety mechanisms.\", \"The paper is well-organized and concise.\"], \"cons\": [\"This paper focuses on jailbreaking through adversarial suffixes, instead of jailbreak attacks in general. While GASP may be effective in compromising current language models, its lack of coverage may limit its usefulness in evaluating new defenses' ability to deal with jailbreak attacks.\"], \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"Accept, novel & impactful but with some key details inaccessible\", \"review\": \"This paper introduces GASP, a framework for generating adversarial suffixes that, when appended to harmful prompts, cause a target black-box LLM to produce harmful outputs. To improve efficiency over existing methods, GASP optimizes directly in the continuous embedding space rather than over discrete token sequences. It also incorporates coherence constraints to ensure that the adversarial suffixes are natural-sounding.\\n\\nThe discussion on the shortcomings of existing jailbreaking approaches is convincing -- they effectively outline the efficiency constraints of manually designed adversarial prompts vs. the incoherence and computational cost of optimization-based methods. \\n\\nTheir method consists of two main steps. The first, pretraining their SuffixLLM, is well-detailed and clearly explained, with a well-defined objective function. The second step\\u2014alignment and refinement using LBO and ORPO\\u2014shows promise but lacks crucial details in the main text. For instance, a brief explanation of GASPEval (which is detailed in the Appendix) would greatly improve comprehension of the LBO and ORPO refinement steps. Here, another clarification would be helpful: how is $p_{nat}(.)$ determined -- based on which probability distribution? \\n\\nThe experiments and results are clearly presented in Table 1. One question: Why are there \\\"-\\\" marks for the ASR@1 scores of the first two rows (GCG and AutoDan)? The ablation study is helpful in highlighting the usefulness of the continuous embedding space exploration. \\n\\nAccept, because the paper introduces a novel and effective framework for generating adversarial suffixes, demonstrating strong empirical results and offering a scalable solution for red-teaming LLMs. The improved readability and efficiency of GASP make it a valuable contribution to the field, even though the current organization makes the paper less accessible than it could be.\", \"minor_comments\": \"Appendix I, Algorithm 1 - line 946, the first mention of TargetLLM should be SuffixLLM\", \"rating\": \"8\", \"confidence\": \"2\"}",
"{\"title\": \"Accept\", \"review\": [\"The paper introduces a new blackbox (i.e. does not require access to the model gradients or any internals) method for generating adversarial suffixes for jailbreaking LLMs, which according to their experiments outperforms all existing methods.\", \"The key (original) components of the design for finding \\u201coptimal adversarial prompts\\u201d seem to be:\", \"A *constrained* optimization, to maximize the probability of an adverserial response while adhering to the constraint that the prompt is natural based on some distribution $p_{\\\\mathrm{nat}}$ that measures the probability that some text is \\u201cnatural\\u201d.\", \"Optimization is done through the embedding space via \\u201cLatent Bayesian Optimization\\u201d (as opposed to e.g. greedy co-ordinate gradient style techniques that optimize on the token space directly)\", \"This optimization trains the parameters of a certain `SuffixLLM` which produces the optimal suffixes to adversarial prompts to elicit \\u201charmful responses\\u201d \\u2014 looking at Appendix I, this is done via ORPO.\", \"Evaluation of responses is done via another LLM `JudgeLLM`, as part of the `GASPEval` procedure, which produces the training signal for `SuffixLLM`.\", \"Some things I didn\\u2019t understand or am not sure about:\", \"Re: the AdvSuffixes dataset \\u2014 if I understand correctly, these are suffixes which adhere to the naturalness constraint, but are not actually successful at jailbreaking the LLM? Then what\\u2019s the point of using these suffixes rather than any random ones \\u2014 is the idea that the probability of an adversarial response is higher for AdvSuffixes than for some random prompts?\", \"The paper claims their approach is totally blackbox and doesn\\u2019t even depend on the logits of the target LLM. But there is constant reference to the target LLM as a probabilistic model i.e. $p(y\\\\mid x)$ etc. \\u2014 are these not the (exponential of the) logits? How else do you model the probability distribution?\", \"Perhaps these need to be explained better, or perhaps I\\u2019m just being dumb.\", \"Regardless, the paper represents a valuable contribution to the field and is very relevant to the workshop topic. It should be accepted.\"], \"rating\": \"8\", \"confidence\": \"2\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
GlmqRQsCaI | ASIDE: Architectural Separation of Instructions and Data in Language Models | [
"Egor Zverev",
"Evgenii Kortukov",
"Alexander Panfilov",
"Soroush Tabesh",
"Sebastian Lapuschkin",
"Wojciech Samek",
"Christoph H. Lampert"
] | Despite their remarkable performance, large language models lack elementary safety features, and this makes them susceptible to numerous malicious attacks. In particular, previous work has identified the absence of an intrinsic separation between instructions and data as a root cause for the success of prompt injection attacks. In this work, we propose an architectural change, ASIDE, that allows the model to clearly separate between instructions and data by using separate embeddings for them. Specifically, the data embedding is initialized with a rotation of the
pretrained model’s embedding, prompting the model to learn to treat instructions and data differently. We demonstrate the effectiveness of our method by showing (1) greatly increased instruction-data separation scores without a loss in model capabilities and (2) competitive results on prompt injection benchmarks, even without dedicated safety training. Additionally, we study the working mechanism behind our method through an analysis of model representations. | [
"Instruction-data separation",
"LLM Safety",
"ML Safety",
"Prompt Injections",
"LLMs"
] | Accept | https://openreview.net/pdf?id=GlmqRQsCaI | https://openreview.net/forum?id=GlmqRQsCaI | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"gsElXwt9Ah",
"W7JEmWKrcg",
"P96bBpSDe6",
"JSUC7ZtwYN"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741055897560,
1740204171970,
1740856037642,
1740868786985
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission75/Reviewer_yo2E"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission75/Reviewer_TNNo"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission75/Reviewer_rFci"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Reviews from yo2E\", \"review\": \"The topic of this paper is aligned with the goal of this workshop: as it focus on the one of the challenges that may cause the prompt injection attacks, i.e., intrinsic separation between instructions and data.\\n\\nThe paper propose an \\\"architectural\\\" change to purposely separate instructions and data using different embeddings, and show the competitive results on prompt injection benchmarks without dedicated safety training. The idea seems to be very straight-forward and easy to scale-up (as the proposed method only requires a different size of embedding layer for instruction and an adapted tokenizer, which don't require to retrain the PLMs from scratch).\", \"several_concerns\": \"(1) the paper relies on a strong assumption of perfect instruction-data classification, however, in real-world settings, a token could function as both instruction and data (e.g., translate hello to french, where it can be considered as a instruction but also cover the required data), I'm curious about how the paper's solution can be adapted to these cases or authors may hold different opinions.\\n\\n(2) there might be limited exploration of the fine-tuning alternatives. The paper might consider more alternatives like parameter-efficient tuning (LoRA, adapters) or other training-free approaches, which could increase the feasibility of applying ASIDE to larger, more complex models.\", \"rating\": \"9\", \"confidence\": \"4\"}",
"{\"title\": \"ASIDE introduces a novel embedding-based instruction-data separation technique to mitigate prompt injection but lacks evaluation against diverse adversarial strategies and multi-turn attacks. While effective in improving security, it requires more diverse prompt injection benchmark evaluations to assess real-world robustness. Rating: Clear Accept (Top 50%).\", \"review\": \"### Summary\\n\\nThe ASIDE paper introduces an architectural modification for LLMs that enforces instruction-data separation to mitigate prompt injection attacks. It achieves this by using separate embeddings for instructions and data, with data embeddings rotated to prevent execution of injected commands. ASIDE significantly reduces attack success rates (ASR) while maintaining instruction-following performance and can be integrated into existing models with minimal fine-tuning.\\n\\n### Strengths\\n\\nASIDE introduces a novel architectural modification that explicitly distinguishes executable instructions from user data, preventing adversarial prompt injections at the embedding level.\\nASIDE can be applied post-hoc to pre-trained models with minimal fine-tuning, making it a practical and computationally efficient solution.\\nDespite improving security, ASIDE maintains strong instruction-following capabilities,\\n\\n### Weaknesses\\n\\nThe success criteria for prompt injection attacks in ASIDE\\u2019s evaluation are overly simplistic, making it easier to defend against structured attacks while ignoring more diverse adversarial strategies. In TensorTrust, ASR is high if the model outputs \\\"Access Granted\\\" after adversarial manipulation; in Gandalf, ASR is high if the model leaks the password \\\"PLANETARY\\\"; and in Purple, ASR is high if the model outputs \\\"purple\\\", despite explicit instructions not to do so. While these benchmarks test basic prompt injection vulnerabilities, they lack attack diversity, as they only evaluate simple rule-breaking scenarios without considering adaptive adversarial techniques, multi-turn attacks, or stealthy manipulations. To improve robustness testing, more diverse benchmarks should be incorporated, such as Microsoft BIPIA, which evaluates adversarially optimized indirect prompt injections, WILDGUARD (AllenAI), which focuses on stealthy adversarial manipulations in real-world LLM applications, and HackPrompt, which tests jailbreak techniques and adversarial red teaming prompts. More advanced and diverse benchmarks are necessary to accurately measure ASIDE\\u2019s ability to resist prompt injection attacks in real-world scenarios.\", \"microsoft_bipia_https\": \"//github.com/microsoft/BIPIA\", \"wildguard_https\": \"//huggingface.co/datasets/allenai/wildguardmix\", \"hackprompt_https\": \"//huggingface.co/datasets/hackaprompt/hackaprompt-dataset.\\n\\nHow well does ASIDE handle multi-turn prompt injections, long context and nested instructions?\\n\\nNot clear if a different rotation would be more or less effective. Also, Is there any other best way to achieve separation apart from rotation? \\n \\nFurthermore, ASIDE can be integrated into already existing language models with minor overhead. There is no concrete discussion about overhead.\\n\\n\\n\\n### Clarifications that did not affect score\\n\\nWhy is high temperature of 0.7 used for evals?\", \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"title\": \"Assessment of ASIDE: A Framework for Instruction-Data Separation in LMs with Mixed Results on Prompt Injection Benchmarks\", \"review\": [\"This paper introduces a framework - ASIDE - for working with LMs to help separate instructions from data. The idea is to use 2 separate token embeddings \\u2013 (1) executable instructions and (2) for tokens in non-executable data. The embedding is initialised as a rotated version of the instruction embedding, which helps the model learn to process instructions and data differently. This is tested through evals on instruction-data separated metrics and prompt injection benchmarks.\", \"Questions\", \"How does the computational cost of ASIDE compare to standard models? Does the double embedding size significantly impact inference speed or memory requirements?\", \"Have you explored different rotation angles beyond 90 degrees? Is there an optimal angle for instruction-data separation?\", \"How does ASIDE perform when combined with specialized safety training or adversarial examples? Could this further improve robustness?\", \"How does the model behave with more complex hierarchies beyond the binary instruction/data distinction? Could this approach be extended to handle multiple privilege levels?\", \"Have you tested ASIDE on other model architectures beyond Llama, such as Mistral or other transformer variants?\", \"Doubts\", \"Not sure if their training approach is comprehensive enough - they only used standard Alpaca data\", \"The results on some prompt injection tests were mixed (like in Table 2 for Completion attacks)\", \"Didn\\u2019t see much comparison with other security approaches\", \"There's no analysis of how much extra computation this requires\", \"I wonder if this approach would work on non-Llama architectures\"], \"rating\": \"6\", \"confidence\": \"2\"}"
]
} |
GhU5J9JqlS | What is the chance of being so unfair? | [] | Fairness has often been seen as an ethical concern that needs to be considered at some cost on the utility. In contrast, in this work, we formulate fairness, and especially fairness in ranking, as a way to avoid unjust biases and provide a more accurate ranking that results in improvement on the actual unbiased utility. With this in mind, we design a fairness measure that, instead of blindly forcing some approximate equality constraint, checks if the outcome is plausible in a just world. Our fairness measure asks a simple and fundamental statistical question: "What is the chance of observing this outcome in an unbiased world?". If the chance is high enough, the outcome is fair. We provide a dynamic programming algorithm that, given a ranking calculates our fairness measure. Secondly, given a sequence of potentially biased scores, along with the sensitive feature, we provide a fair ranking algorithm based on our fairness measure. Finally, we run some experiments to understand the behavior of our ranking algorithm against other fundamental algorithms. | [
"fairness",
"ranking"
] | Reject | https://openreview.net/pdf?id=GhU5J9JqlS | https://openreview.net/forum?id=GhU5J9JqlS | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"dflghJmyYL",
"ZveJf7H2Zk",
"2gJDuPG7Ez"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740375485053,
1740694281895,
1741078411290
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission19/Reviewer_oBre"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission19/Reviewer_x28q"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Review of \\u201cWhat Is the Chance of Being So Unfair?\\u201d\", \"review\": \"**Paper summary:**\\n\\nIn this paper, the authors propose a group fairness metric for ranking applications, based on a statistical difference between probability distributions of utility across different groups. Furthermore, they propose a dynamic programming algorithm to compute this metric, develop a \\u201cfair ranking\\u201d algorithm based on the metric, and conduct experiments to compare it against other \\u201cfair ranking\\u201d algorithms.\\n\\n**Reasons to accept:**\\n\\n1.\\tIt is interesting to approach the problem of group fairness as a statistical problem with each group represented as a probability distribution.\\n\\n**Reasons to reject:**\\n\\n1.\\tAccording to Figure 1, where the authors\\u2019 method does perform better than other methods, the improvement is extremely marginal, which heavily casts into doubt the novelty and utility of their approach compared to existing metrics.\\n2.\\tThere is a lack of motivation of why the author\\u2019s approach should be used over state-of-the-art ones [1], or the specific limitations of other metrics / algorithms that the author\\u2019s approach can address. The authors mention certain other methods are \\u201cblindly forcing some approximate equality constraint\\u201d (lines 14-15), but do not mention why that this existing approach is problematic or limited in any way.\\n3.\\tThere is a lack of theoretical support for the superiority or effectiveness of the proposed metric and algorithms. Theorems 1 and 2 have no associated proof, and the components of the mathematical formulations (e.g. what the fractions in Theorem 1 represent) are not explained in adequate detail.\\n4.\\tThe proposed algorithm is limited by the strict requirement for all partial sets in \\ud835\\udf0f to be \\u03b4-rare. For many practical applications, such a ranking, while possible, may only be able to produce a very suboptimal utility. The authors do not consider if and how this assumption can be relaxed under certain conditions.\\n5.\\tThe authors don\\u2019t experiment with real-world datasets, only synthetic ones. This is problematic because real-world data is often more noisy or complex and does not strictly obey a mathematical distribution. In this way, the authors do not show how the algorithm can perform in practice for various applications.\\n6.\\tThere is no discussion on the real-world utility of proposed metric and algorithm, limitations, or future work.\\n7.\\tThe authors mention an Appendix (line 319), but it is not present in the final submission.\\n\\n**Suggestions for authors:**\\n\\n1.\\tIn much of the paper, the approach is framed using a single example with women and non-women. I would suggest framing it in a more generalizable way, not simply tailoring to a single example.\\n2.\\tI would recommend against using the word \\u201cminority\\u201d (line 62), as the fairness problem can also apply to sets of groups where there is no clear or fixed minority. I would recommend using \\u201cprotected class\\u201d or \\u201csensitive class\\u201d instead.\\n3.\\tI would recommend against using \\u201cunbiased\\u201d (e.g. line 75) \\u2013 it\\u2019s a bit misleading because (to me) it assumes that these scores exist in practice rather than hypothetically. I recommend using something like \\u201ctheoretically unbiased\\u201d instead.\\n\\n[1] M. Zehlike et al. Fairness in ranking: A Survey. 2021.\", \"rating\": \"2\", \"confidence\": \"3\"}",
"{\"title\": \"Out of Scope of the Workshop\", \"review\": \"My review will be very brief, as I only broadly perused the submission. This is because I found that the paper is not at all in scope of the workshop.\\n\\nThe paper proposes a new fairness measure in the context of ranking, and then argues for why this fairness metric is the right way to discuss fairness in ranking. Finally, they propose an algorithm that can turn an 'unfair' ranking into a 'fair' ranking under this metric.\\n\\nThe problem setting in the paper does not incorporate LLMs at any point. The setup is ranking, solved using traditional techniques. The dataset used is tabular. In fact, in a brute force attempt to find some connection, I even 'searched' for the terms 'LLM' or 'language' in the entire paper, finding not even a single occurrence.\\n\\nThis might or might not be a good paper. The review is not a comment on the quality of the paper. However, the paper is completely out of scope of the workshop, and I don't believe the reviewers should be expected to have any further comments about a paper this far away from the actual scope of the workshop.\", \"rating\": \"2\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper introduces a new fairness metric for ranking and proposes an algorithm to transform unfair rankings into fair ones based on this metric. However, it is deemed entirely out of scope for the workshop, as it does not involve LLMs or language-based models.\", \"title\": \"Paper Decision\"}"
]
} |
GYykSL4GG0 | Invisible Traces: Using Hybrid Fingerprinting to identify underlying LLMs in GenAI Apps | [] | Fingerprinting refers to the process of identifying underlying Machine Learning (ML) models of AI Systems, such as Large Language Models (LLMs), by analyzing their unique characteristics or patterns, much like a human fingerprint. The fingerprinting of Large Language Models (LLMs) has become essential for ensuring the security and transparency of AI-integrated applications. While existing methods primarily rely on access to direct interactions with the application to infer model identity, they often fail in real-world scenarios involving multi-agent systems, frequent model updates, and restricted access to model internals. In this paper, we introduce a novel fingerprinting framework designed to address these challenges by integrating static and dynamic fingerprinting techniques. Our approach identifies architectural features and behavioral traits, enabling accurate and robust fingerprinting of LLMs in dynamic environments. We also highlight new threat scenarios where traditional fingerprinting methods are ineffective. Our results highlight the framework's adaptability to diverse scenarios. | [
"LLM Fingerprinting",
"AI Security"
] | Reject | https://openreview.net/pdf?id=GYykSL4GG0 | https://openreview.net/forum?id=GYykSL4GG0 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"z3mzkzfxyo",
"texmZIfTgY",
"oKNLWrkkup",
"2EYF9mdS79"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740751195061,
1741109750497,
1740916526299,
1740436486182
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission72/Reviewer_kFkP"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission72/Reviewer_1oBq"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission72/Reviewer_omri"
]
],
"structured_content_str": [
"{\"title\": \"Invisible Traces: Using Hybrid Fingerprinting to Identify Underlying LLMs in GenAI Apps\", \"review\": \"# Review\\nThis paper introduces a novel hybrid fingerprinting framework that combines static and dynamic fingerprinting techniques to identify underlying Large Language Models (LLMs) in generative AI applications.\\n\\n\\n## Strengths\\n1. **Proposes a hybrid fingerprinting framework**, which effectively combines static and dynamic fingerprinting techniques to identify underlying LLMs in generative AI applications.\\n2. **Comprehensive experiments**\\u00a0and visualizations (e.g., t-SNE plots) demonstrate the effectiveness of the proposed method.\\n\\n## Weaknesses\\nThe framework may require retraining if the underlying models are updated or fine-tuned, which could be a limitation in dynamic environments where models evolve rapidly.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"comment\": \"This paper presents a hybrid fingerprinting framework that combines static and dynamic techniques to identify underlying LLMs in AI applications. While the topic is interesting and timely, it is not directly relevant to the workshop on Building Trust in LLMs, as it focuses more on model identification rather than transparency, alignment, or user trust. Given the low relevance to the workshop and existing methodological concerns, I recommend rejection.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review of paper Invisible Traces: Using Hybrid Fingerprinting to identify underlying LLMs in GenAI Apps\", \"review\": \"The paper categorize LLM fingerprinting into two paradigms: Static Fingerprinting and Dynamic\\nFingerprinting. Then the paper presents a hybrid fingerprinting method combining these two paradigms.\", \"strengths\": [\"Convicing empirical results on the effectiveness of the proposed fingerprinting method.\", \"Since the dynamic fingerprinting does not require access to the original LLM, it may have broader real-world applications, especially when the LLM is unavailable.\"], \"weaknesses_and_suggestions\": [\"How do you set the hyperparameter $\\\\alpha$ in Line 282? It seems that the results are heavily influenced by this hyperparameter. Does this parameter need to be carefully tuned to achieve the optimal results?\", \"The paper would benefit from a discussion on whether a model can still be identified from your fingerprint after fine-tuning.\", \"Please use `\\\\citep{}` instead of `\\\\cite{}` for your citations.\"], \"rating\": \"5\", \"confidence\": \"5\"}",
"{\"title\": \"Official Review\", \"review\": [\"## Summary\", \"The paper proposes a hybrid method for fingerprinting large language models by combining targetted and un-targetted queries. For targetted queries, the paper uses a method from prior work (LLMMap), where the queries are generated by observing the outputs of various LLMs and maximizing the inter-LLM discrepancy. For untargetted queries, the method simply an uses off-the-shelf dataset of LLM interactions and trains a model to distinguish between various LLMs given their responses. The paper argues that targetted queries cannot necessarily be used in several real world scenarios, such as agentic frameworks, combinations of LLMs or dynamically changing LLMs. Hence, one needs fingerprinting methods which do not depend on dynamically generted queries.\", \"## Strengths\", \"The paper addresses a timely problem of identifying LLMs deployed in a larger system, which seems to be the new dominant paradigm after chatbots\", \"The performance of the combined approach is well above the baselines, indicating a promising direction of future research.\", \"## Weaknesses\", \"### Writing.\", \"I don't believe that the paper is well written.\", \"I find the terms `static' and `dynamic' fingerprinting to be confusing. If I understand correctly, static fingerprinting constructs **targetted** fingerprints, while dynamic fingerprinting just uses model traces on **any** interaction between the user and the system. In that sense the former actually has more dynamism.\", \"### Contributions\", \"I also do not fully grasp the contributions of the paper over LLMMap. In my perception, the paper claims that one does not necessarily need tailored fingerprints, and one can combine tailored and non-tailored fingerprints to get better detection? Also, the paper uses ModernBERT as opposed to the custom architecture from LLMMap to classify the models. If these are the main contributions, they do not seem to be very significant, and they are not highlighted well in the paper\", \"It seems like the best results are by combining static and dynamic queries, but this goes against the setting and motivation of the paper where one assumes a dynamic environment.\", \"### Other details\", \"I also am not sure how the static and dynamic approaches are combined - is it an ensemble using the method from line 281 or are the static queries used for the dynamic classifier? Is one of them a better approach than the other? If the former is the case, how was $\\\\alpha$ selected?\", \"Similarly, for a larger number of queries, how are the predictions combined? Is it simply majority voting?\"], \"rating\": \"5\", \"confidence\": \"4\"}"
]
} |
GScy14jUjc | Latent Adversarial Training Improves the Representation of Refusal | [
"Alexandra Abbas",
"Nora Petrova",
"Hélios Lyons",
"Natalia Perez-Campanero"
] | Recent work has shown that language models' refusal behavior is primarily encoded in a single direction in their latent space, making it vulnerable to targeted attacks. While Latent Adversarial Training (LAT) attempts to improve robustness by introducing noise during training, a key question remains: How does this noise-based training affect the underlying representation of refusal behavior? Understanding this encoding is crucial for evaluating LAT's effectiveness and limitations, just as the discovery of linear refusal directions revealed vulnerabilities in traditional supervised safety fine-tuning (SSFT).
Through the analysis of Llama 2 7B, we examine how LAT reorganizes the refusal behavior in the model's latent space compared to SSFT and embedding space adversarial training (AT). By computing activation differences between harmful and harmless instruction pairs and applying Singular Value Decomposition (SVD), we find that LAT significantly alters the refusal representation, concentrating it in the first two SVD components which explain approximately 75% of the activation differences variance—significantly higher than in reference models. This concentrated representation leads to more effective and transferable refusal vectors for ablation attacks: LAT models show improved robustness when attacked with vectors from reference models but become more vulnerable to self-generated vectors compared to SSFT and AT. Our findings suggest that LAT's training perturbations enable a more comprehensive representation of refusal behavior, highlighting both its potential strengths and vulnerabilities for improving model safety. | [
"Latent Adversarial Training",
"refusal behavior",
"refusal direction ablation",
"adversarial robustness"
] | Accept | https://openreview.net/pdf?id=GScy14jUjc | https://openreview.net/forum?id=GScy14jUjc | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"qb6WfXDwgL",
"kT4AvTjRTs",
"cErTpL4shP",
"PtETnBOYYB"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740574294761,
1741050552767,
1740907346134,
1741103838783
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission105/Reviewer_nAPg"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission105/Reviewer_7haj"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission105/Reviewer_4YeK"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Good paper that highlights strengths / weaknesses of latent adversarial training\", \"review\": \"This work aims to understand how latent adversarial training (LAT) affects the representation of refusal behaviour in the LLM. The authors demonstrate that LAT improves refusal representation and provide insights into pros and cons of LAT.\", \"strengths\": [\"The paper is technically solid. The ablation attack and latent space analysis are done according to the recent sota.\", \"The paper discovers an important vulnerability of the LAT: \\u201cLAT\\u2019s superior encoding of refusal behavior, while potentially beneficial for model robustness, also creates a more potent attack vector.\\u201d\"], \"weaknesses\": [\"No justification given to why only 14th layer is analyzed, except for the reference to Arditi et al. (2024). Would be good to discuss it, at least in the appendix.\", \"It would be interesting to have results on more models than llama-2, but given the nature of the paper, it\\u2019s a minor weakness.\"], \"please_fix_a_typo\": \"1.1 METHODS should be a section, not subsection.\", \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"title\": \"Interesting findings of how refusal representations change under LAT, but would benefit from more experimental results\", \"review\": \"### Summary\\n\\nThis work shows how Latent Adversarial Training affects the refusal representation in the residual stream of an LLM. This direction is found to be more concentrated (i.e. in the first two SVD components) in the LAT model than in the base model. This results in a refusal direction that better transfers back to the base model, but also makes the LAT model more vulnerable to refusal ablation attacks.\\n\\nMy main criticism is with the experimental results being limited on a single older model (Llama2-7B) which in my experience can exhibit different behaviour to more modern models. While I think the findings and insights are nice, the limited experimental results make this borderline to me. I would recommended acceptance with more comprehensive experimental results (i.e. a conditional acceptance with the inclusion of experiments on several more modern models -- I would leave this to the discretion of the meta-reviewer/ACs). Current models are often even smaller (e.g. Llama3.2-3B-instruct, Phi3.5, etc) and computing refusal vectors is not the most expensive task, so I don\\u2019t think compute should be a major limitation. \\n\\n### Strengths\\n\\n- Interesting findings of how LAT impacts the refusal representation in the residual stream; the result that the LAT model results in a direction that is easier to intervene on is surprising\\n- Transfer results between models are also interesting\\n- Well written and clear, easy to follow\\n\\n### Weaknesses\\n\\n- Limited experimental results make it difficult to tell how robust this phenomenon is. The analysis is strictly limited to Llama2-7B; while I generally object to criticisms that simply complain about running on other models, I do think this is important here because in my experience Llama 2 can behave differently to more recent LLMs, and I believe it\\u2019s important to provide more convincing experimental results.\\n\\n### Questions/Comments\\n\\n- Missing citation for *Efficient Adversarial Training in LLMs with Continuous Attacks* (Xhonneux et al 2024) with regards to \\u201cembeddings AT\\u201d\\n- Why do you think there\\u2019s a difference between LAT and embedding adversarial training? I.e. in Figure 2, why does embedding AT seem to be comparable to the base model, while LAT results in a significant increase in explained variance? Clearly LAT is more likely to directly impact the latent representations in the residual stream, but one would expect continuous attacks in the embedding space should also impact the representations in the residual stream?\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"title\": \"This paper highlights Latent Adversarial Training (LAT) as a novel approach to restructuring refusal behavior in LLMs, making it more robust to external attacks but vulnerable to self-generated ones. While the study provides strong empirical insights, it lacks diverse adversarial evaluations and broader model testing, making it better suited for a Tiny Paper submission rather than a full-length\", \"review\": \"### Summary\\n\\nThis paper investigates Latent Adversarial Training (LAT) as an alternative to traditional safety fine-tuning techniques for improving LLM robustness against prompt injection and refusal manipulation attacks. LAT applies adversarial perturbations in hidden layers rather than in input embeddings, significantly altering how refusal behavior is encoded. The study finds that LAT compresses refusal behavior into fewer latent dimensions, making refusals more structured and more robust to external attacks but also more vulnerable to self-generated attacks. Evaluations using Singular Value Decomposition and ablation attacks on LLaMA-2-7B show that LAT models transfer better across different attack vectors but fail more frequently when attacked with self-generated refusal vectors. These findings highlight both the promise and risks of LAT, suggesting future work should focus on mitigating its self-vulnerability while preserving its robustness advantages.\\n\\n\\n### Strengths\\n\\nThe study finds that LAT enhances robustness against external attacks but increases vulnerability to self-generated attacks, offering a nuanced perspective on its real-world applicability\\n\\nThe study employs Singular Value Decomposition and Principal Component Analysis to analyze how LAT restructures refusal behavior in LLMs .\\n\\nIt provides quantitative evidence that LAT concentrates refusal representations into fewer dimensions, making them more structured and transferable across models.\\n\\n### Weaknesses\\n\\nOne of the key weaknesses of this paper is that its core contribution can be presented concisely without requiring a full-length submission. The primary findings are important but not extensive enough to warrant a long paper.\\n\\nMany prompts in AdvBench explicitly ask for illegal or harmful content. But Modern jailbreak techniques use more indirect approaches. There is a need for more comprehensive evaluation by using more diverse jand real-world jailbreak datasets like hack prompt dataset.\\n\\nThe study only evaluates LLaMA-2-7B, potentially overfitting results to a narrow model class. Additionally, the adversarial suffixes in Advbench were trained and fine-tuned using Vicuna and LLaMA-2-7B-Chat as the primary models and this can bias the results in the paper. Hence, there is a need to conduct experiments on other models like GPT, Claude, Gemini, Mistral etc.\", \"rating\": \"6\", \"confidence\": \"2\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
GHUh9O5Im8 | TRUTH DECAY: Quantifying Multi-Turn Sycophancy in Language Models | [] | Rapid improvements in large language models have unveiled a critical challenge in human-AI interaction: sycophancy. In this context, sycophancy refers to the tendency of models to excessively agree with or flatter users, often at the expense of factual accuracy. While previous studies have primarily analyzed this behavior in single-turn interactions, its persistence and evolution in multi-step conversations remain largely unexplored. We introduce TRUTH DECAY, a benchmark specifically designed to evaluate sycophancy in extended dialogues, where language models must navigate iterative user feedback, challenges, and persuasion. We prompt models to elicit four types of sycophantic biases. We then propose and test sycophancy reduction strategies, evaluating their effectiveness beyond single-step interactions. | [
"LLM",
"Sycophancy",
"Multi-step Dialogue",
"Mimicry",
"Static Feedback",
"Answer Sycophancy"
] | Reject | https://openreview.net/pdf?id=GHUh9O5Im8 | https://openreview.net/forum?id=GHUh9O5Im8 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"Y2yC5f0Xd0",
"QXUaJAglZZ",
"9UPrn8lByg",
"1FN1NKnlxD"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740870372894,
1739706632363,
1740306229173,
1741084889291
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission127/Reviewer_7aun"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission127/Reviewer_DDVF"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission127/Reviewer_xHuN"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"The paper introduces a useful multi-turn sycophancy benchmark but needs better clarity and justification for some claims\", \"review\": [\"## Summary\", \"This paper introduces Truth Decay, a new benchmark designed to evaluate sycophantic tendencies in large language models (LLMs) over multi-turn interactions. The authors argue that existing sycophancy evaluation methods primarily focus on single-turn responses and fail to capture the compounding nature of sycophancy in extended conversations. To address this gap, the paper proposes two evaluation strategies: (1) static feedback-based sycophancy, where models are tested with structured follow-up prompts, and (2) rationale-based sycophancy, where incorrect responses are reinforced with persuasive rationales generated by another LLM. Additionally, the paper evaluates sycophancy mitigation techniques through specific anti-sycophancy prompts. The study assesses Claude Haiku, GPT-4o-mini, and Llama 3.1 8B across these settings, demonstrating how models progressively adopt sycophantic behaviors over multiple turns and struggle to maintain factual accuracy.\", \"## Strengths\", \"Important Problem Addressed: The paper highlights a significant limitation of current LLMs\\u2014their susceptibility to sycophantic behavior, which can compromise reliability in high-stakes applications.\", \"Multi-Turn Evaluation: Unlike existing single-turn sycophancy tests, this study systematically examines how model responses evolve over extended interactions, revealing the compounding nature of sycophancy.\", \"Diverse Testing Approaches: The inclusion of both static follow-ups and rationale-based interventions provides a nuanced perspective on how models internalize and propagate incorrect information.\", \"Discussion on Anchoring Effects: Section 5.4 presents an interesting discussion on how models may exhibit anchoring biases, making it difficult to revert to factual correctness after an initial incorrect response.\", \"## Weaknesses\", \"Writing Clarity and Organization:\", \"The paper uses incorrect citation formatting (e.g., should use `\\\\citep{}` when references are not integral to the sentence) and quoation marks (e.g., should use ``latex quotation marks'')\", \"The abstract does not clearly outline the paper's contributions and key findings.\", \"The introduction effectively presents the problem but does not sufficiently describe the proposed evaluation approach.\", \"The related work section lacks a more explicit comparison with existing multi-turn benchmarking efforts.\", \"Figures are not consistently referenced within the text, making it difficult to follow the discussion.\", \"Limited Evaluation Scope: The experimental results are constrained to three models (Claude Haiku, GPT-4o-mini, Llama 3.1 8B), which may not generalize to other state-of-the-art models with stronger anti-sycophancy safeguards. The authors should also specify which specific version of Claude Haiku was used.\", \"Weak Justification for Some Claims:\", \"Section 5.1 claims that sycophantic tendencies are already present in single-step dialogues and worsen in multi-turn settings but does not clearly indicate which results support this claim.\", \"Some key results supporting the analysis are only in the appendix, without clear references in the main text.\", \"## Overall Evaluation\", \"The paper addresses an important issue and presents a novel approach to studying sycophancy in LLMs beyond single-turn interactions. 
The multi-turn evaluation paradigm is a valuable addition to existing benchmarks, and the experiments provide useful insights into how sycophancy compounds over repeated interactions. However, the paper would benefit from clearer writing, better justification of claims, and a stronger discussion of its relation to existing multi-turn benchmarks. Additionally, providing a more comprehensive evaluation with a broader range of models would strengthen its contributions. Overall, while the paper presents valuable work, its clarity and rigor need improvement before publication.\"], \"rating\": \"3\", \"confidence\": \"3\"}",
"{\"title\": \"Review of TRUTH DECAY\", \"review\": [\"# Summary\", \"The paper introduces TRUTH DECAY, a multi-turn evaluation benchmark designed to measure sycophancy in Large Language Models (LLMs). It evaluates factual accuracy degradation as models interact with users over multiple conversational turns, using both static follow-up prompts and rationale-based adversarial probes. The evaluation is conducted on TruthfulQA and MMLU-Pro with several models (Claude Haiku, GPT-4o-mini, LLaMA 3.1 8B), showing that sycophancy increases over turns, particularly in subjective domains like philosophy. Two simple mitigation prompts are also tested.\", \"## Strengths\", \"Addresses an important issue in LLM evaluation: factual degradation and sycophantic alignment in multi-turn dialogues.\", \"Methodologically sound, combining existing datasets with a multi-turn evaluation structure.\", \"Useful practical insights on accuracy degradation across domains and the potential for simple prompting strategies to reduce sycophancy.\", \"## Weaknesses\", \"The novelty is somewhat limited, as multi-turn factual degradation has been explored in prior work (e.g., Laban et al., 2024). This work focuses more specifically on sycophancy, which is a valuable extension but not a major conceptual leap.\", \"The benchmark is more of a practical evaluation setup than a foundational benchmark, which is fine for a workshop but should be framed as such.\", \"Results align largely with expectations\\u2014sycophancy increases over turns, and subjective domains degrade more than STEM\\u2014but the findings are still practically useful.\", \"## Suggested Improvements\", \"Clarify the positioning of the benchmark as a practical evaluation pipeline rather than a foundational benchmark.\", \"Acknowledge and situate the work more clearly alongside Laban et al. (2024) and [Scheurer et al.](https://arxiv.org/pdf/2311.07590) to emphasize its contribution as an extension to sycophancy evaluation.\"], \"rating\": \"6\", \"confidence\": \"5\"}",
"{\"title\": \"Review\", \"review\": [\"The paper explores the phenomenon of sycophancy in large language models (LLMs), specifically in multi-turn conversations. It introduces TRUTH DECAY, a benchmark designed to evaluate sycophantic behavior over extended dialogues, where language models must handle iterative user feedback, persuasion, and challenges. The study highlights how LLMs, particularly those trained with reinforcement learning from human feedback (RLHF), can drift toward excessive agreement, sacrificing factual accuracy. The authors test various sycophancy reduction strategies and demonstrate that such biases worsen with multiple exchanges. The paper suggests that current LLMs struggle to maintain objectivity and truthfulness during extended interactions and proposes methods to mitigate these effects in future models.\", \"Strengths\", \"The TRUTH DECAY benchmark provides a structured and in-depth evaluation of sycophantic behavior in multi-turn conversations, a critical aspect of LLM performance.\", \"The study highlights how current LLMs, particularly those trained with RLHF, exhibit sycophantic tendencies, shedding light on important model behavior that needs attention.\", \"The paper not only identifies sycophancy issues but also proposes potential strategies to reduce biases, paving the way for more objective and accurate future models.\", \"Weaknesses\", \"The TRUTH DECAY benchmark may not capture all nuances of sycophantic behavior, potentially overlooking subtler forms of bias in different contexts.\", \"The focus on models trained with reinforcement learning from human feedback may limit the generalizability of the findings to other types of model training approaches.\", \"The proposed sycophancy reduction strategies might not scale effectively to larger, more complex models, potentially requiring significant adjustments or trade-offs in performance.\", \"The citation form is not appropriate and the table captions are not correctly placed above the tables.\"], \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
FLZaztMjja | Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study Over Open-ended Question Answering | [
"Yuan Sui",
"Yufei He",
"Zifeng Ding",
"Bryan Hooi"
] | Recent works integrating Knowledge Graphs (KGs) have led to promising improvements in enhancing the reasoning accuracy of Large Language Models (LLMs). However, current benchmarks focus mainly on closed-ended tasks, leaving a gap in the assessment of more complex real-world scenarios. This gap has also obscured the evaluation of KGs' potential to mitigate the problem of hallucination in LLMs. To fill the gap, we introduce OKGQA, a new benchmark specifically designed to assess LLMs enhanced with KGs under open-ended, real-world question answering scenarios. OKGQA is designed to closely reflect the complexities of practical applications using questions from different types, and incorporates specific metrics to measure both hallucination ratio and the enhancement in reasoning capabilities. To consider the scenario in which KGs may have varying levels of mistakes, we propose another benchmark variant OKGQA-P to assess model performance when the semantics and structure of KGs are deliberately perturbed and contaminated. OKGQA aims to (1) explore whether KGs can make LLMs more trustworthy in an open-ended setting, and (2) conduct a comparative analysis to shed light on method design. We believe that this study can facilitate a more complete performance comparison and encourage continuous improvement in integrating KGs with LLMs to reduce hallucination. | [
"Large Language Model Hallucination",
"Faithful Reasoning",
"Knowledge Graph-based Question Answering"
] | Accept | https://openreview.net/pdf?id=FLZaztMjja | https://openreview.net/forum?id=FLZaztMjja | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"sO5w1VP0YO",
"JbynfRagH9",
"6Bm2zWiY8s"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740957176592,
1740916273703,
1741099441960
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission147/Reviewer_m7Lk"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission147/Reviewer_UP2x"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"review\": \"This work proposes OKGQA, a novel benchmark for evaluating LLM+KG on open-ended question answering tasks. By allowing models to generate paragraph-length natural responses rather than constrained outputs, it enables the use of established hallucination metrics like FActScore and SAFE. The study demonstrates that integrating knowledge graphs reduces hallucination across multiple LLM architectures, with subgraph retrieval methods performing best. The authors furthermore show that KG augmentation remains beneficial even when knowledge sources contain imperfections.\", \"strengths\": [\"This work creates the first open-ended question answering benchmark specifically designed to evaluate hallucination in KG-augmented LLMs.\", \"It tests multiple LLM architectures (GPT-4o, Llama-3.1, Mistral, Gemma) and various KG integration methods.\", \"It employs multiple evaluation metrics (FActScore, SAFE, G-Eval) with validation through human correlation studies\", \"The authors investigated robustness when using imperfect knowledge sources through the OKGQA-P variant, and found the KG still improves the performance.\"], \"weaknesses\": [\"The dataset relies exclusively on DBpedia, limiting generalizability to other knowledge graphs.\", \"Data in the datasets is generated by templates, which may not perfectly represent real-world KG usage.\", \"Limited exploration of how the size and structure of retrieved knowledge affects performance.\", \"Only evaluated one graph retrieval method from the literature and would be beneficial to add more evaluated methods.\"], \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"This paper proposes the OKGQA benchmark (and its variant OKGQA-P) to assess whether integrating knowledge graphs (KGs) can reduce hallucinations in large language models (LLMs) during open-ended question answering. The study introduces a KG-augmented retrieval framework\\u2014with variants based on triplets, paths, and subgraphs\\u2014and evaluates its performance using metrics such as FActScore, SAFE, and LLM-based evaluators.The paper tackles an important problem\\u2014mitigating LLM hallucinations by leveraging external structured knowledge\\u2014but falls short on several fronts.\", \"review\": \"Strengths\\n\\n1.Relevance:\\n The issue of hallucinations in LLMs is critical, and the idea of using KGs for enhanced factuality is timely.\\n\\n2.Empirical Breadth:\\n The authors conduct extensive experiments across multiple retrieval strategies (triplet, path, and subgraph retrieval) and evaluate various LLM backbones, which at least provides a wide empirical scope.\\n\\nWeaknesses\\n\\n1.Lack of Novelty:\\n The proposed KG-augmented framework is a rather straightforward extension of existing retrieval-augmented generation (RAG) paradigms. \\n\\n2.Methodological Shortcomings:\\nThe benchmark's exclusive reliance on DBpedia subgraphs limits its applicability to real-world scenarios by failing to capture the diversity and dynamism of broader knowledge graphs, thereby raising significant concerns regarding the representativeness of the KG data. Moreover, the perturbation methods employed in OKGQA-P to simulate KG noise are superficial and lack rigorous motivation, further undermining the benchmark\\u2019s ability to accurately reflect real-world KG imperfections.\\n\\n3.Clarity and Organization:\\n The manuscript is excessively dense and suffers from poor organization. Critical details\\u2014including hyperparameter choices and the rationale behind them\\u2014are inadequately explained, making the work hard to follow.\\n\\n\\n4.Overemphasis on Empirical Comparison:\\n While the empirical evaluation is extensive, the narrative lacks focus. It is unclear which aspects of KG integration truly drive improvements, and the paper does little to dissect why certain retrieval methods (e.g., subgraph retrieval) outperform others under noisy conditions.\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
FB8FMU99BC | Evaluation of Large Language Models via Coupled Token Generation | [
"Nina L. Corvelo Benz",
"Stratis Tsirtsis",
"Eleni Straitouri",
"Ivi Chatzi",
"Ander Artola Velasco",
"Suhas Thejaswi",
"Manuel Gomez Rodriguez"
] | State of the art large language models rely on randomization to respond to a prompt. As an immediate consequence, a model may respond differently to the same prompt if asked multiple times. In this work, we argue that the evaluation and ranking of large language models should control for the randomization underpinning their functioning. Our starting point is the development of a causal model for coupled autoregressive generation, which allows different large language models to sample responses with the same source of randomness. Building upon our causal model, we first show that, on evaluations based on benchmark datasets, coupled autoregressive generation leads to the same conclusions as vanilla autoregressive generation but using provably fewer samples. However, we further show that, on evaluations based on pairwise comparisons, coupled and vanilla autoregressive generation can surprisingly lead to different rankings when comparing more than two models, even with an infinite amount of samples. This suggests that the apparent advantage of a model over others in existing evaluation protocols may not be genuine but rather confounded by the randomness inherent to the generation process. To illustrate and complement our theoretical results, we conduct experiments with several large language models from the Llama family. We find that, across multiple knowledge areas from the popular MMLU benchmark dataset, coupled autoregressive generation requires up to 40% fewer samples to reach the same conclusions as vanilla autoregressive generation. Further, using data from the LMSYS Chatbot Arena platform, we find that the win-rates derived from pairwise comparisons by a strong large language model to prompts differ under coupled and vanilla autoregressive generation. | [
"LLM Evaluation",
"Machine Learning",
"Causality"
] | Accept | https://openreview.net/pdf?id=FB8FMU99BC | https://openreview.net/forum?id=FB8FMU99BC | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"Os2iHarNuh",
"LXI88AxmHK",
"KhuRiwDUE4"
],
"note_type": [
"decision",
"official_review",
"official_review"
],
"note_created": [
1741182181033,
1739751740875,
1740037306795
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission61/Reviewer_S3Ha"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission61/Reviewer_7LdU"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review of Large Language Models via Coupled Token Generation\", \"review\": \"Summary\\nThe paper introduces a causal model for coupled autoregressive generation, which shares the same randomness across different LLMs to ensure that performance differences are due to model characteristics rather than sampling noise. The authors theoretically demonstrate that this approach reduces the variance of performance estimates, leading to more sample-efficient evaluations and potentially different model rankings in pairwise comparisons. Empirical results on benchmarks MMLU and LMSYS Chatbot Arena further validate that coupled generation provides more reliable and intuitive assessments of LLM performance compared to independent sampling.\\n\\nStrengths \\nApproach is both theoretically and empirically justified. \\nHighlights an important, underexplored source of variability\\u2014sampling randomness\\u2014shows how it can confound model evaluation, and offers a solution.\\n\\nWeaknesses/Questions \\nThe practical details of coupling the noise across models are not thoroughly detailed, leaving some ambiguity in replication. Details could be added to the appendix.\", \"minor_comments\": \"Lines 121, 194 1099, 1101 contains broken ?? equation or section references.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"good evalution method but a bit complex\", \"review\": \"The paper proposes an evaluation method that reduces the randomness in LLMs evaluation through coupled generation. This approach is theoretically innovative, especially in offering a new perspective on addressing the inconsistency in LLM evaluations caused by randomness. The paper demonstrates that coupled generation can reduce the need for samples while maintaining the reliability of evaluation results.\\n\\nHowever, while the causal model provides a framework for understanding randomness, its application in the paper seems somewhat complex. The core idea of sharing random seeds could potentially be explained and implemented in a simpler manner.\\n\\nThe theoretical analysis relies on \\\"canonical settings\\\" such as binary choice questions, single-token responses, and the Gumbel-Max SCM. These settings might be overly simplified and may not adequately represent real-world LLM application scenarios, limiting the generalizability of the results.\\n\\nAdditionally, using GPT-4 as the judge for evaluation seems weak. At a minimum, GPT-4 Turbo would offer better performance for this task.\\n\\nLastly, the causal model's explanation appears unclear at times, potentially hindering readers from fully grasping the paper's proposed method. A clearer and more accessible explanation would benefit the overall presentation.\", \"rating\": \"6\", \"confidence\": \"3\"}"
]
} |
EAFMVwEOun | Underestimated Privacy Risks for Minority Populations in Large Language Model Unlearning | [] | Large Language Models (LLMs) embed sensitive, human-generated data, prompting the need for unlearning methods. Although certified unlearning offers strong privacy guarantees, its restrictive assumptions make it unsuitable for LLMs, giving rise to various heuristic approaches typically assessed through empirical evaluations. These standard evaluations randomly select data for removal, apply unlearning techniques, and use membership inference attacks (MIAs) to compare unlearned models against models retrained without the removed data. However, to ensure robust privacy protections for every data point, it is essential to account for scenarios in which certain data subsets face elevated risks. Prior research suggests that outliers, particularly including data tied to minority groups, often exhibit higher memorization propensity which indicates they may be more difficult to unlearn. Building on these insights, we introduce a complementary, minority-aware evaluation framework to highlight blind spots in existing frameworks. We substantiate our findings with carefully designed experiments, using canaries with personally identifiable information (PII) to represent these minority subsets and demonstrate that they suffer at least 20\% higher privacy leakage across various unlearning methods, MIAs, datasets, and LLM scales. Our proposed minority-aware evaluation framework marks an essential step toward more equitable and comprehensive assessments of LLM unlearning efficacy. | [
"Machine Unlearning",
"Large Language Models",
"Minority Groups"
] | Reject | https://openreview.net/pdf?id=EAFMVwEOun | https://openreview.net/forum?id=EAFMVwEOun | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"cK3oNbKhOU",
"NmZFJ75r7p",
"NluML2vgxk"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740911060628,
1740909727237,
1741182264985
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission28/Reviewer_iooh"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission28/Reviewer_b6Cn"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"The paper highlights the importance of a minority-aware approach in LLM unlearning and proposes a novel evaluation protocol, its findings are undermined by methodological shortcomings, including concerns over dataset validity and the omission of stronger MIA techniques.\", \"review\": \"This work underlines the importance of a minority-aware approach in evaluating LLM unlearning and shows the disproportionate impact of privacy leakage on minority groups. The proposed evaluation protocol addresses this gap, enabling a more comprehensive assessment of unlearning methods. The findings underscore the significance of incorporating noise in unlearning approaches.\\n\\nHowever, the paper has several notable shortcomings:\\n\\n- The use of the ECHR dataset raises questions about the validity of the (MIA) results. If all partitions of the dataset were used, it could lead to significantly better MIA performance due to the temporal shift between train and test splits. Recent literature has highlighted this issue, showing that MIA can work much better under these conditions [1], and even blind models can distinguish members from non-members [2].\\n\\n- The study overlooks stronger, reference-free MIA techniques that could have provided more robust results. Notably, it fails to consider advanced methods such as Min-K%++ [3] and CAMIA (Context-Aware Membership Inference Attack) [4].\\n\\n- The paper in some points does not cite the mentioned methods (e.g. lines 99-100).\\n\\n[1] Zhang, J., Das, D., Kamath, G., & Tram\\u00e8r, F. (2024). Membership inference attacks cannot prove that a model was trained on your data.\\u00a0arXiv preprint arXiv:2409.19798.\\n[2] Das, D., Zhang, J., & Tram\\u00e8r, F. (2024). Blind baselines beat membership inference attacks for foundation models.\\u00a0arXiv preprint arXiv:2406.16201.\\n[3] Zhang, J., Sun, J., Yeats, E., Ouyang, Y., Kuo, M., Zhang, J., ... & Li, H. (2024). Min-k%++: Improved baseline for detecting pre-training data from large language models. arXiv preprint arXiv:2404.02936.\\n[4] Chang, H., Shamsabadi, A. S., Katevas, K., Haddadi, H., & Shokri, R. (2024). Context-aware membership inference attacks against pre-trained large language models. arXiv preprint arXiv:2409.13745.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"An insightful approach to audit the privacy of unlearning algorithms, but lack empirical validation across multiple random seeds\", \"review\": \"Summary:\\n\\nThis paper highlights that auditing the privacy of unlearning algorithms should not be done on any random forgotten samples because it can lead to underestimation of privacy risks, especially on high-risk samples such as those belonging to minority groups. In their experiments on three datasets and two models, the privacy leakage is found to be significantly larger on canary data and minority data, which corroborates their claim. Based on this finding, the paper proposes a minority-aware unlearning evaluation protocol by measuring the worst-case privacy leakage among three forget sets (i.e., random, canary, and minority). Through the proposed evaluation, the paper reveals that only the Langevin Unlearning method can achieve a good privacy-utility trade-off, surpassing other unlearning baselines that do not incorporate noise.\\n\\nOverall, the paper is clearly written with reasonable motivation drawn from the privacy auditing literature and supporting empirical results on three datasets (Enron-Phone, Enron-Email, ECHR-Year) and two LLMs (GPT-2 and Llama-2). Although the results are only reported for a single random seed, I believe this concern can be addressed in the camera-ready version. Therefore, I would recommend acceptance with rating 6.\", \"pros\": \"1.\\u2060 \\u2060The bias in selecting the forget set can greatly influence the performance of unlearning algorithms. Therefore, the problem targeted in this paper is important to ensure that unlearning algorithms and unlearned models are correctly evaluated from the privacy perspective.\\n\\n2.\\u2060 \\u2060The authors did a good job when providing results on many state-of-the-art unlearning algorithms in a controlled setting (same computational budget) to ensure fair comparison. The results from their experiments also have interesting implications when revealing that SOTA algorithms (e.g., SCRUB) are privacy-risky and encouraging algorithms with better privacy guarantees, such as Langevin Unlearning.\", \"cons\": \"1.\\u2060 \\u2060From the empirical part, the results are not reliable enough as they are reported for a single random seed 42. The authors should provide supplementary results on multiple random seeds and their corresponding variance.\\n\\n2.\\u2060 \\u2060Based on Table 2 & 4-6, GA and RL incur less privacy leakage for canaries and minority data. The authors could provide potential explanations and if possible, empirical results to confirm their explanations.\\n\\n3.\\u2060 \\u2060Based on the experiment description in Section 6, the authors used 10k samples for GPT-2 experiments and 50k samples for Llama-2 experiments. I\\u2019m wondering why the authors didn\\u2019t use the same dataset size for both experiments and didn\\u2019t use the entire dataset.\\n\\n4.\\u2060 \\u2060Although I agree that evaluating unlearning algorithms on random subsets can give a false sense of privacy, the idea of measuring worst-case privacy on minority groups is well-known in the privacy auditing literature. 
Therefore, the novelty of the findings in this paper is limited in my opinion.\", \"suggestions\": \"1.\\u2060 \\u2060The authors should provide supplementary results on other random seeds.\\n\\n2.\\u2060 \\u2060The authors may include results on the full datasets or provide explanations for their choice of dataset resampling.\\n\\n3.\\u2060 \\u2060As the proposed evaluation protocol is evaluated on minority groups, the authors can also discuss the approach to choosing the minority groups in real-world datasets instead of relying on PII datasets.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
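The second review above describes the paper's protocol as reporting the worst-case privacy leakage across random, canary, and minority forget sets. A minimal sketch of that kind of minority-aware aggregation is shown below; it assumes per-example membership-inference scores are already available and uses AUC as the leakage metric, both of which are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def mia_auc(member_scores, nonmember_scores):
    """AUC of a membership-inference attack from per-example scores,
    where a higher score means 'more likely to have been a member'."""
    labels = np.concatenate([np.ones(len(member_scores)), np.zeros(len(nonmember_scores))])
    scores = np.concatenate([member_scores, nonmember_scores])
    return roc_auc_score(labels, scores)

def minority_aware_leakage(scores_by_forget_set):
    """Per-subset MIA AUC plus the worst case over forget-set types
    (e.g., 'random', 'canary', 'minority')."""
    aucs = {name: mia_auc(m, n) for name, (m, n) in scores_by_forget_set.items()}
    return aucs, max(aucs.values())

# Toy usage with synthetic attack scores (e.g., negative per-example loss).
rng = np.random.default_rng(0)
scores = {
    "random":   (rng.normal(0.2, 1.0, 200), rng.normal(0.0, 1.0, 200)),
    "canary":   (rng.normal(0.6, 1.0, 200), rng.normal(0.0, 1.0, 200)),
    "minority": (rng.normal(0.8, 1.0, 200), rng.normal(0.0, 1.0, 200)),
}
per_subset, worst_case = minority_aware_leakage(scores)
print(per_subset, worst_case)
```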
DuZoEslwOv | Veracity: An Online, Open-Source Fact-Checking Solution | [] | The proliferation of misinformation poses a significant threat to society, exacerbated by the capabilities of generative AI.
This demo paper introduces Veracity, an open-source AI system designed to empower individuals to combat misinformation through transparent and accessible fact-checking. Veracity leverages the synergy between Large Language Models (LLMs) and web retrieval agents to analyze user-submitted claims and provide grounded veracity assessments with intuitive explanations. Key features include multilingual support, numerical scoring of claim veracity, and an interactive interface inspired by familiar messaging applications. This paper will showcase Veracity's ability to not only detect misinformation but also explain its reasoning, fostering media literacy and promoting a more informed society. | [
"Misinformation",
"Fact-Checking",
"AI for good",
"Trust"
] | Reject | https://openreview.net/pdf?id=DuZoEslwOv | https://openreview.net/forum?id=DuZoEslwOv | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"oXt85JDN7z",
"16wHHsK5zt"
],
"note_type": [
"official_review",
"decision"
],
"note_created": [
1740687338629,
1741058010108
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission91/Reviewer_6yid"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Veracity: An Online, Open-Source Fact-Checking Solution\", \"review\": \"The combination of Large Language Models (LLMs) and information retrieval presents a promising approach to enhancing reliability in fact-checking. While the proposed approach is valuable for LLM applications, greater transparency is needed regarding the factors LLMs consider when assigning reliability scores. While the paper mentions credibility assessments for sources, it does not explain how these scores are calculated\\u2014a more detailed discussion would improve clarity. Additionally, further information on the LLM itself, including its pre-training data size and source (whether it has been self-developed or used open source LLM), would strengthen the paper\\u2019s technical foundation. Providing actual examples illustrating the model\\u2019s accuracy in distinguishing facts would reinforce its practical effectiveness. Similarly, clarifying whether the information retrieval method is compatible with various retrieval techniques would enhance the paper\\u2019s scope. Finally, empirical experiments demonstrating the method\\u2019s performance would significantly improve the study\\u2019s credibility.\", \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
DAFbrvSQrC | The Differences Between Direct Alignment Algorithms are a Blur | [
"Alexey Gorbatovski",
"Boris Shaposhnikov",
"Viacheslav Sinii",
"Alexey Malakhov",
"Daniil Gavrilov"
] | Direct Alignment Algorithms (DAAs) simplify language model alignment by replacing reinforcement learning (RL) and reward modeling (RM) in Reinforcement Learning from Human Feedback (RLHF) with direct policy optimization. DAAs can be classified by their ranking losses (pairwise vs. pointwise), by the rewards used in those losses (e.g., likelihood ratios of policy and reference policy, or odds ratios), or by whether a Supervised Fine-Tuning (SFT) phase is required (two-stage vs. one-stage). We first show that one-stage methods underperform two-stage methods. To address this, we incorporate an explicit SFT phase and introduce the $\beta$ parameter, controlling the strength of preference optimization, into single-stage ORPO and ASFT. These modifications improve their performance in Alpaca Eval 2 by +$3.46$ (ORPO) and +$8.27$ (ASFT), matching two-stage methods like DPO. Further analysis reveals that the key factor is whether the approach uses pairwise or pointwise objectives, rather than the specific implicit reward or loss function. These results highlight the importance of careful evaluation to avoid premature claims of performance gains or overall superiority in alignment algorithms. | [
"direct alignment algorithms",
"large language models",
"preference optimization"
] | Accept | https://openreview.net/pdf?id=DAFbrvSQrC | https://openreview.net/forum?id=DAFbrvSQrC | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"ycWs5W9bbA",
"erWgO4Caxf",
"dJHplBwuwU",
"IW1RW5JmH5"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740207163677,
1740912201917,
1741082374366,
1740995207003
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission17/Reviewer_9QPC"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission17/Reviewer_sHeH"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission17/Reviewer_hSxb"
]
],
"structured_content_str": [
"{\"title\": \"Reviews from 9QPC\", \"review\": \"The paper presents a comparative analysis of Direct Alignment Algorithms (DAAs) for aligning large language models (LLMs) with human preferences, bypassing traditional reinforcement learning (RL) and reward modeling steps used in Reinforcement Learning from Human Feedback (RLHF). It categorizes DAAs by ranking losses (pairwise vs. pointwise), implicit reward types (e.g., likelihood ratios vs. odds ratios), and the presence of a Supervised Fine-Tuning (SFT) phase (one-stage vs. two-stage). The authors enhance one-stage methods like ORPO and ASFT by incorporating an explicit SFT phase and a scaling parameter \\u03b2, demonstrating improved performance. Through theoretical proofs and empirical evaluations, they argue that the distinction between pairwise and pointwise objectives is more critical to alignment quality than the choice of reward formulation, and that even a small SFT dataset can significantly boost performance.\", \"pros\": [\"Improves one-stage methods (ORPO, ASFT) with SFT and \\u03b2, matching two-stage method performance.\", \"Demonstrates pairwise objectives outperform pointwise objectives, guiding DAA design.\", \"Shows 5-10% of SFT data achieves near-optimal alignment, lowering computational costs.\"], \"cons\": [\"The writing could be improved, as there cover too many technical details in the introduction, the authors may consider to smooth their language to make sure readers with limited knowledge of this area can easily follow the paper.\", \"This reliance on a single automated evaluator (gpt-4o) risks skewing the findings, as the results may not reflect genuine improvements in alignment quality. A more reliable evaluation would include human judgments or a mix of automated metrics to ensure fairness and reduce bias, making the conclusions less convincing without such validation.\", \"The paper might want to demonstrate some cases or add some instructions regarding the relevance to the topic of this workshop. As the topic seems to be somehow relevant to the workshop as alignment could be useful for LLM trustfulness, but the relationship is not that clear in this paper which requires further clarification, otherwise, it may be considered out-of-scope for this workshop's target.\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"This paper studies DAAs and shows their relationships and comparative advantages by identifying key factors and their influence on LLM alignment.\", \"review\": \"This paper studies direct alignment algorithms (DAAs), which directly align large language models (LLMs) without explicit reward modeling and then use reinforcement learning for LLM alignment. The authors categorize various DAAs by their use of loss function (pairwise vs. pointwise), reward function (likelihood ratios, odds ratios), and the need for a supervised fine-tuning (SFT) phase (one-stage vs. two-stage). Further, they have shown the relationship between different direct alignment algorithms. Through experiments on Llama models (3B- 8B) and alignment datasets, authors have shown that (1) two-stage methods outperform one-stage methods, such as ORPO and ASFT, (2) pairwise methods outperform pointwise methods, and (3) using a smaller, high-quality subset of the full SFT dataset can be sufficient to achieve the same performance.\", \"the_following_are_the_pros_of_the_paper\": \"1.\\u2060 \\u2060The experiments only used a specific LLama model (e.g., 3B), set of datasets, and benchmarks, which may limit the real-life applications of the results, especially when larger models are used.\\n\\n2.\\u2060 \\u2060Using GPT-based preference judgments may introduce potential biases.\\n\\n\\nThis paper studies DAAs and shows their relationships and comparative advantages by identifying key factors and their influence on performance. The results are indeed the first step towards demystifying the \\\"blur\\\" between different DAAs and provide valuable guidance to improve the alignment of language models using human preference feedback.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"This is a good paper providing both theoretical analysis and empirical evidence to answer a few important research questions in the area of Direct Alignment Algorithms.\", \"review\": \"This paper offers a unified analysis of Direct Alignment Algorithms for language model alignment, combining theoretical and empirical approaches. The authors make a contribution by systematically evaluating one-stage versus two-stage methods, providing clear evidence that incorporating an explicit SFT phase substantially improves performance for methods like ORPO and ASFT.\\n\\nThe introduction of the $\\\\beta$ parameter as a way to control preference optimization strength is particularly insightful, demonstrating how seemingly different methods can be understood within a common framework. Their findings that pairwise methods generally outperform pointwise objectives offers practical guidance for practitioners, while simplifying the decision-making process by showing that this choice is more influential than the specific implicit reward formulation.\\n\\nDespite these strengths, the paper has some limitations. The analysis excludes some competitive algorithms like EXO, which would have provided a more comprehensive picture of the DAA landscape. The work could also benefit from deeper theoretical insights into why pairwise methods consistently outperform pointwise ones.\\n\\nOverall, this work provides actionable guidance for practitioners implementing alignment methods and establishes a useful framework for understanding and improving DAAs. The paper's systematic approach to addressing key research questions makes it a valuable contribution to the field of language model alignment.\", \"rating\": \"8\", \"confidence\": \"4\"}"
]
} |
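For context on the pairwise, \u03b2-scaled objectives discussed in this record, the sketch below shows the standard DPO pairwise loss built from policy/reference log-likelihood ratios; the paper's modified ORPO and ASFT losses are not reproduced here, and the \u03b2 value and batch of log-probabilities are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def dpo_pairwise_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO pairwise objective on summed response log-probabilities.

    beta scales the strength of preference optimization; the implicit reward
    is the log-likelihood ratio between the policy and the frozen reference.
    """
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -F.logsigmoid(beta * margin).mean()

# Toy usage: made-up sequence log-probs for a batch of three preference pairs.
logp_c = torch.tensor([-12.0, -9.5, -20.0])
logp_r = torch.tensor([-14.0, -9.0, -25.0])
ref_c = torch.tensor([-13.0, -10.0, -21.0])
ref_r = torch.tensor([-13.5, -9.5, -24.0])
print(dpo_pairwise_loss(logp_c, logp_r, ref_c, ref_r, beta=0.1))
```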
D8oTSUnEfb | Diagnostic Uncertainty: Teaching Language Models to Describe Open-Ended Uncertainty | [
"Brian Sui",
"Jessy Lin",
"Michelle Li",
"Anca Dragan",
"Dan Klein",
"Jacob Steinhardt"
] | Language models (LMs) often hallucinate. While uncertainty measures like calibration scores provide coarse measures of model uncertainty (e.g. "This proof is 40% likely to be correct"), ideally a model could tell us what it's uncertain about, such as "I don't know how to find the length of side AB," enabling people to understand exactly where to trust a model response.
We propose diagnostic uncertainty: open-ended descriptions of uncertainty that are grounded in model behavior. Our key idea is that a model can be said to be uncertain about X (e.g., "how to find the length of side AB") if its responses significantly improve after being told
X, and X is earliest in its reasoning process.
We implement a method to bootstrap models' ability to generate these diagnostic uncertainty descriptions by iteratively training on sampled descriptions that satisfy these criteria.
To evaluate whether diagnostic descriptions are meaningful, we provide the model with the information it claims to be uncertain about and measure whether its performance improves.
Compared to the descriptions generated by prompting alone, resolving diagnostic uncertainty descriptions leads to 8% higher accuracy and 20% more reduction in entropy of the answer distribution, supporting the hypothesis that diagnostic uncertainty is more faithful to the model's underlying uncertainty.
The main contribution of our work is a framework for operationalizing open-ended uncertainty in LMs, enabling richer ways for people to understand LM behavior beyond raw probabilities. | [
"uncertainty",
"hallucination",
"trust",
"large language model"
] | Accept | https://openreview.net/pdf?id=D8oTSUnEfb | https://openreview.net/forum?id=D8oTSUnEfb | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"ubGSzrOtzz",
"fUh6LavScE",
"d6obLtxaE8",
"AdxERPcpHP"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740904512204,
1740938439517,
1741099676387,
1740863221619
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission115/Reviewer_ck23"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission115/Reviewer_UBYp"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission115/Reviewer_bpCH"
]
],
"structured_content_str": [
"{\"title\": \"The authors proposed a novel way in an attempt to to identify the root cause of the response uncertainty of LLMs.\", \"review\": \"**Strengths**\\n\\n- **Research Problem Identification:** \\n The authors proposed a novel way in an attempt to to identify the root cause of the response uncertainty of LLMs. This can be more effective than the vanilla verbalized uncertainty.\\n\\n- **Method:** \\nThe authors proposed a practical method to fine-tune the model to improve its ability to output a more specific uncertain aspect of the task that is more crucial for reducing the response uncertainty and improve its quality.\\n\\n- **Experiments/Results:** \\n The author showed on a math dataset their more nuanced verbalized diagnostic uncertainty is more effective at pinpointing the cause of model's response uncertainty.\\n\\n**Weaknesses**\\n- **Assumption on What Constitutes 'Uncertainty':** \\nThe authors said 'Our key idea is that a model can be said to be uncertain about X if knowing X would improve its responses', but the authors also agree that 'the standard training methods for language models do not incentivize faithfulness by default'. So this is also possible that the model is blindly confident about a false belief, which it will not verbalize and result in false negatives.\\n\\n- **Uncertainty Decomposition:** \\n While the uncertainty that is both critical and root is important, the author may also want to consider other sources of uncertainty (technically every reasoning steps can have some degree of uncertainty). In particular, it is not always the case that addressing the most upstream uncertainty gives rise to the most uncertainty reduction, let alone guaranteeing the elimination of all downstream uncertainties.\\n\\n\\n- **Objective of the Judge Model J:** \\n The authors stated that 'Our goal is to use the judge to select for examples where the model expresses more specific uncertainty when possible', but the end goal is to select the more 'upstream' uncertainty. However, 'more specific' does not necessitate 'more upstream' in many cases. Think of a tree search problem with larger branching factor near the leaf node.\\n\\n- **Root Uncertainty Selection Criteria in the Experiment vs in the Definition:** \\n In practice, it is very challenging to select the root uncertainty strictly according to its counter-factual definition given in line 080, as it requires to check if after addressing the candidate of root uncertainty, address the all the rest of the uncertainties will not lead to improvement for model response. In practice, it can be the case that even after eliminating the $k$ most upstream uncertainties, there will still be residual uncertainty left in the model response. Therefore, there is some degree of inconsistency in how root uncertainty is defined vs how it is checked in the experiment. 
Unless making this clear, the authors are conflating their own definition of root uncertainty and the notion of upstream uncertainty.\\n\\n- **Lack of Results on Failure Cases Due to Teacher Model:** \\nIn Line 468-475 the authors clearly listed down the cases where the teacher model fails to provide needed feedback to the student model, but in the paper the relevant results were not shown.\\n\\n**Question**\\n\\n- L344: It is not clear to me why the author chose to 'add the selected uncertainties and queries to the training dataset and train from the base model (rather than the fine-tuned M from the previous iteration)'.\\n\\n- L430: It is said 'As seen in figure Figure 6, our model M \\u2032 achieved 5% higher accuracy relative to gpt-4o-mini', but I only saw around 3% improvement on the validation set. I might have misunderstood something here, but I think this part is not written clearly.\", \"rating\": \"6\", \"confidence\": \"5\"}",
"{\"title\": \"Good contribution in using uncertainty to solve\", \"review\": \"This is a good contribution that demonstrates that training models to identify which steps of solving a problem they are most uncertain about improves overall task accuracy. I would encourage the authors to extend to other model families and more different task types.\", \"rating\": \"7\", \"confidence\": \"2\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review of Submission Number 115\", \"review\": \"This paper introduces the notion of \\u2018diagnostic uncertainty\\u2019 - rather than an LLM providing a general uncertainty estimate of its answer, it is more valuable in many settings to obtain specifically the step(s) that it is critically uncertain about - that is, where clarification would have the biggest impact on downstream accuracy - and is also the root - the first such step.\\n\\nTo do this, the authors propose to fine-tune a model to verbalize their uncertainty about such a step, by first asking it to state which step it is most uncertain about over multiple samples, and then using a teacher model equipped with the correct answer to measure in which of those steps positive accuracy improvement is obtained by provision of the information. Furthermore a \\u2018judge model\\u2019 is trained to order these steps so that the root step can be identified. This fine-tuning-and-generation cycle is repeated multiple times.\\n\\nThe authors demonstrate that their method elicits the highest accuracy improvement and entropy reduction over baselines.\\n\\nOverall, I encourage acceptance of this paper as I expect that it will foster significant discussion of interest amongst those working in or adjacent to this area, despite the paper\\u2019s weaknesses enumerated below.\\n\\n## Strengths:\\n\\n1. The paper discusses an interesting extension of an important area in current LLM research - uncertainty calibration/elicitation - into diagnostic uncertainty, which to my knowledge has not hitherto been examined extensively in the literature.\\n2. The paper proposes an interesting conceptual framework in which to understand diagnostic uncertainty.\\n3. The paper is relatively clear to follow - the motivations and baselines considered are reasonable.\\n\\n## Weaknesses:\\n\\n1. Although the motivation in Section 3.1 suggests that Criticality is defined as: \\u2018[clarifying the step\\u2019s uncertainty] determines the downstream computation to solve the problem correctly\\u2019 - this is not the measure used for criticality filtering in the method. Explicitly, the model is asked first to state \\u2018what [it is] most uncertain about\\u2019 which is not precisely the same thing. Moreover, the actual criticality filtering method takes all such queries which improve downstream performance - which is, again, not precisely the same thing - a more close match to the conceptual motivation would be taking all steps which result in 100% accuracy. In general, I encourage the authors to reflect more deeply on precisely what the desiderata are of the diagnostic uncertainty that they wish to extract, and then design a method that maps to that more faithfully; or, if not, to support why they deviate from that with experimental justification or otherwise.\\n2. Similarly, although the conceptual framework\\u2019s description of Rootness is clear, I am not convinced that this translates cleanly into real solutions. In real solutions, it may not be the case that there is a clear notion of \\u2018upstream\\u2019 or \\u2018downstream\\u2019 and these may also be very hard to determine - the analogy to traversal of a directed graph is, I think, only a very fuzzy one in reality. As the authors themselves also point out, rootness may be a feature of sets of nodes rather than a single node - which is the implicit assumption that is being made in the methodology and experiments presented. 
In practice, the method proposed also relies on a hand-labelled dataset, which is difficult to scale, especially to other more complex tasks. In my view, rootness - though an interesting property - is less important than criticality, and I would suggest that the authors focus most on diagnostic uncertainty specifically for maximising accuracy/performance first.\\n3. As the authors mention in Section 6, there is seemingly no filtering to ensure the teacher model is not providing significantly more information than the requested step\\u2019s clarification, jeopardizing the interpretation of the results.\\n4. For a future conference-level submission, the authors should ensure the results are replicated on a wider set of datasets/tasks, and ideally models as well.\\n\\n## Questions:\\n\\n1. For the judge - it is not clear if the 75% accuracy on the validation set is \\u2018good\\u2019. Is the baseline there 50% - is the score measured over pairwise accuracy, or is it the accuracy of the full ranking by repeated application of the pairwise judge?). \\n2. In lines 301-303, should the upstream labels A and B be the case for s1 and s2, rather than when there is a giveaway query g? If not - I do not understand why the model is not trained on two specific queries - that would seem to be the entire point of the judge?\\n3. How have you ensured that the MATH dataset/task fits into the framework of a single node correction being sufficient for correctness, rather than a set? If this is not ensured, or the case, then what is the repercussion of the assumption the method is making when applied to this problem?\\n4. Why do you retrain from the base model each time (lines 343-345) rather than iterate continually on the latest finetune? I did not spot a justification for this choice.\", \"rating\": \"7\", \"confidence\": \"4\"}"
]
} |
D3feioZDtK | Privacy Auditing for Large Language Models with Natural Identifiers | [] | The privacy auditing for large language models (LLMs) faces significant challenges. Membership inference attacks, once considered a practical privacy auditing tool, are unreliable for pretrained LLMs due to the lack of non-member data from the same distribution as the member data. Exacerbating the situation further, the dataset inference cannot be performed without such a non-member set. Finally, we lack a formal post hoc auditing of training privacy guarantees. Previous differential privacy auditing methods are impractical since they rely on inserting specially crafted canary data *during training*, making audits on already pre-trained LLMs impossible without expensive retraining. This work introduces **natural identifiers (NIDs)** as a novel solution to these challenges. NIDs are structured random strings, such as SSH keys, cryptographic hashes, and shortened URLs, which naturally occur in common LLM training datasets. Their format enables the generation of unlimited additional random strings from the same distribution, which can act as non-members or alternative canaries for audit. Leveraging this property, we show how NIDs support robust evaluation of membership inference attacks, enable dataset inference for any suspect set containing NIDs, and facilitate post hoc privacy auditing without retraining. | [
"LLMs",
"canaries",
"data inference"
] | Reject | https://openreview.net/pdf?id=D3feioZDtK | https://openreview.net/forum?id=D3feioZDtK | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"cXMqtYOCCe",
"A6S1Dq49ys",
"8cGyTWCBZG"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740894465891,
1740851092073,
1741082242991
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission13/Reviewer_QBWy"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission13/Reviewer_KSE7"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Review of Proposed Privacy Auditing Approach for LLMs\", \"review\": \"# Summary\\n\\nThe paper suggests \\\"Natural Identifiers\\\" (NIDs) as an alternative solution to address privacy auditing challenges in large language models (LLMs). The authors demonstrate that existing methods of privacy auditing like membership inference attacks (MIAs) and dataset inference have limitations in that they need non-member data to be identically distributed as training data and are reliant on canary data planted during training, thereby posing challenges in auditing pre-trained LLMs.\\n\\nNIDs, as proposed, are randomly generated strings (the authors give some examples like SSH keys, crypto hashes) which should appear by chance in LLM training datasets. The observation is that these NIDs allow for constructing \\\"unbounded additional random strings from the same distribution, which the authors contend can be used as non-members or alternative canaries for audit. This enables robust testing of MIAs, dataset inference on any suspect set with NIDs, and post-hoc privacy auditing without retraining.\\\".\\n\\nThe paper demonstrates the effectiveness of NIDs by measuring MIAs and inferring from datasets for Pythia family of models and Pile dataset and OLMo models. The result is that NIDs facilitate privacy auditing and analysis of privacy risks in LLMs without requiring retraining and generating accurate results.\\n\\n# Quality and Clarity of the Paper\\n\\nGenerally, the paper is well-written and organized. Overall quality seems to be good.\\n\\n\\n# Pros and Cons\\n## Pros\\n\\n1. **Fair Novelty:** The paper introduces a fairly novel concept; utilizing natural identifiers within training sets as both a baseline and as an auditing mechanism post-hoc sounds like an acceptable contribution because the prior work has relied upon injecting synthetic canaries or train/test splits.\\n2. **Practical Applicability:** The proposed approach, tackling the central problem of lacking suitable non-member data for proper MIA testing, is practically applicable in the sense that it avoids expensive retraining of LLMs and can be applied directly to current pre-trained models.\\n\\n## Cons:\\n1. **Distribution Assumption for NIDs:** NIDs in real-world applications are not necessarily random strings because they are part of a context. In the case of hashes, a source hash could be the hash of a trending file, a specific version of software, or even an indexed sensitive document on the internet. Because of this, it could be presented to the LLMs in many textual scenarios during training. Replacing it with an actually random hash shatters these associations and semantic relationships learned. The LLM may have witnessed the original hash in diverse text contexts (security alerts, codebases, threads), and such contexts enrich its understanding of the information being transmitted by that NID. A random substitution lacks this rich context information, yielding a distribution difference that the attacker may be able to exploit or renders the auditing unreliable.\\n2. **Scalability:** Identifying NIDs in massive datasets is computationally expensive. While the paper claims this is easier than retraining, it might still be a significant bottleneck, particularly for real-time monitoring. How does the identification process scale? Are there efficient algorithms for identifying new or less common NID types?\\n\\n3. **Limited Range of Identifiers:** The approach relies on the availability of specific, defined NID forms. 
Most sensitive data elements in LLM training data are likely not structured identifiers such as names, addresses, and opinions, for which the proposed approach is not found effective.\\n\\n# Minor Errors:\\n- Dictation error for \\\"divers\\\" on line 54.\\n\\n# Questions\\n1. Why can NIDs have meanings in the context in which they occur but truly random strings cannot? (line 194)\", \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"Paper Review\", \"review\": \"> Summary:\\n\\nThis paper introduces Natural Identifiers as an approach to privacy auditing in large language models, enabling distribution-shift-free membership inference attacks, dataset inference without private validation sets, and post-hoc differential privacy auditing without retraining. \\n\\n> Pros:\\n\\n1.The way of conducting LLM privacy audits is new, using new types of identifiers, which can to some extent address certain challenges in LLM privacy regime.\\n\\n\\n> Cons:\\n\\n1.Some claims are not well-supported by the experiments. For examples, from line 313 - line 317, how can current results support the claim that models with larger training corpus memorize less? \\n\\n2.In crafted examples, are the canaries crafted by only replacing original sample NIDs in sentences with new generated NIDs while keeping the rest part unchanged? I think presentation of methodology settings can be polished to be more clear (an overall framework, or figs of examples).\\n\\n3.The organization of Table 1 is confusing. Why does Table 1 have some first-row entries as dataset names while others are labeled as \\\"Train\\\" and \\\"Average\\\"? Additionally, in the second row, while \\\"Train\\\" and \\\"Test\\\" splits are indicated, multiple dataset names are listed, and the test results are not provided.\\n\\n4.From my perspective, aside from the introduction of the new benchmark framework itself, the paper does not present surprising results or general takeaways that are different from previous literatures.\", \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"comment\": \"The limitations of standard differential privacy auditing techniques are well pointed out by the authors and furthermore, the novelty of the proposed solution is quite significant. However, the reviewers have raised some good questions for the alternative solution to be viable. In particular, identification of NID's in a massive dataset and their limited range are valid points.\", \"title\": \"Paper Decision\"}"
]
} |
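The abstract and reviews above describe NIDs as structured random strings (cryptographic hashes, SSH keys, shortened URLs) whose format allows unlimited fresh samples from the same distribution to serve as non-members or alternative canaries. A minimal sketch of that generation step is below; the specific formats and the URL prefix are illustrative assumptions, not details taken from the paper.

```python
import secrets
import string

def fresh_sha256_like():
    """A random 64-character lowercase-hex string, format-matched to a
    SHA-256 digest as it would appear in web text."""
    return secrets.token_hex(32)

def fresh_short_url_slug(length=7):
    """A random alphanumeric slug, format-matched to a shortened-URL id."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Fresh strings drawn from the same format distribution can act as
# non-members or extra canaries when auditing an already-trained model.
print(fresh_sha256_like())
print("https://bit.ly/" + fresh_short_url_slug())
```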
CqViN4dQJk | Language Models Use Trigonometry to Do Addition | [
"Subhash Kantamneni",
"Max Tegmark"
] | Mathematical reasoning is an increasingly important indicator of large language model (LLM) capabilities, yet we lack understanding of how LLMs process even simple mathematical tasks. To address this, we reverse engineer how three mid-sized LLMs compute addition. We first discover that numbers are represented in these LLMs as a generalized helix, which is strongly causally implicated for the tasks of addition and subtraction, and is also causally relevant for integer division, multiplication, and modular arithmetic. We then propose that LLMs compute addition by manipulating this generalized helix using the “Clock” algorithm: to solve $a+b$, the helices for $a$ and $b$ are manipulated to produce the $a+b$ answer helix which is then read out to model logits. We model influential MLP outputs, attention head outputs, and even individual neuron preactivations with these helices and verify our understanding with causal interventions. By demonstrating that LLMs represent numbers on a helix and manipulate this helix to perform addition, we present the first representation-level explanation of an LLM's mathematical capability. | [
"LLMs",
"Mechanistic Interpretability",
"Mathematics",
"Reasoning"
] | Accept | https://openreview.net/pdf?id=CqViN4dQJk | https://openreview.net/forum?id=CqViN4dQJk | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"o4QRSoHisW",
"XeaCRcjafD",
"GzKVluxHWV",
"BvTjz7LRJO"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740547148359,
1740610811729,
1741081364495,
1740884257240
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission8/Reviewer_mrkv"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission8/Reviewer_qit3"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission8/Reviewer_uqJH"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"review\": \"Summary\\n\\nThe paper investigates how LLMs perform mathematical operations like addition by reverse-engineering their internal representations and computation mechanisms. The authors analyze three mid-sized LLMs and find that these models represent numbers using a generalized helix structure, which can be manipulated to perform arithmetic operations. They thus connect the LLM mechanism to a \\u201cClock\\u201d algorithm, where numbers are embedded as helices, shifted, and combined in a way similar to a rotating dial, ultimately producing the addition results. Beyond addition, the paper also examines whether the helical representation extends to other mathematical operations such as subtraction, multiplication, integer division, and modular arithmetic.\", \"strengths\": \"The paper presents a novel mechanistic analysis of how LLMs process addition by identifying the helical representation of numbers and the Clock algorithm. Such mechanistic explanations of mathematical operations within LLMs use causal interventions and neuron-level analysis, demonstrating evidence that numerical processing is structured rather than heuristic-based. The analysis was done across multiple models to support the robustness of these findings.\", \"weaknesses\": \"While understanding is an essential step to achieve trustworthiness, what are the concrete and practical applications of this research it's not clear or discussed. Meanwhile, there is one closely related paper that is missing [1].\\n\\n[1] Quirke, P., & Barez, F. (2023). Understanding addition in transformers. arXiv preprint arXiv:2310.13121.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"Review\", \"review\": \"**Paper Summary**\\n\\nThis paper presents an interpretability analysis of how pre-trained language models produce predictions for arithmetic queries, with a focus on addition. The authors begin by examining GPT-J\\u2019s representations of numerical values and note that the residual stream in the first layer encodes both linear and periodic information. They hypothesize that this numerical information can be modeled as a set of circular components sharing a single linear direction, which they term a \\u201cgeneralized helix.\\u201d After fitting the helix model, the authors demonstrate through causal interventions that it explains variations in the model\\u2019s predictions better than certain baselines. The paper also investigates how these representations combine to form the representation of an addition result and analyzes the role of attention heads and MLP neurons in computing the output.\\n\\n**Strengths**\\n\\nThe paper puts together insights from previous works into a more comprehensive picture that better describes how transformer-based language models process arithmetic queries.\\n\\n**Weaknesses**\\n\\n- It is unclear whether the helix model significantly outperforms a simple circular representation. Results in Figures 4 and 16 indicate only a marginal improvement over the simpler circular fit, which has one fewer degree of freedom. The significance of this improvement and the role of the linear component in the helix model require further clarification.\\n- The experimental procedure and the results presented in Section 5.2 are quite confusing. The authors introduce two scores to categorize attention heads without providing numerical values or distributions for these scores. The process appears to involve manual classification of whether a head\\u2019s output is diverted to the logits or suppressed from directly affecting the prediction, and by tuning the number of heads in each category, they claim that 80% of the total effect can be recovered. This approach does not provide convincing evidence regarding the precise role of the attention heads, which remains vague beyond \\u201cmoving information.\\u201d\\n- The overall clarity of the presentation could be improved, for example, by reorganizing the visualizations so that the main text does not rely heavily on references to the Appendix.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"comment\": \"The paper is interesting, liked by all reviewers and provides a nice clear explanation of how numbers are added up by mid-sized LLM's. This work is very relevant to the workshop.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"A very interesting paper in understanding the inner working mechanism of LLM for addition task.\", \"review\": \"This paper provides explanations regarding how LLM performs mathematical reasoning. This paper focuses on the addition task and demonstrates that the LLM uses helix encoding and a \\\"clock\\\" algorithm to perform the addition calculation. Accordingly, this paper also provides detailed modeling on the neural preactivations and analysis on the hidden representations at different layers, leading to a more precise characterization on the mechansim of LLM.\\n\\nOverall, this is a very interesting paper that delivers new insights and findings in the field of mechanism interpretability. One limitation is that this paper is very task-specific, it remains unclear whether other mathematical tasks are using similar approaches.\", \"rating\": \"8\", \"confidence\": \"4\"}"
]
} |
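The record above describes numbers being represented as a "generalized helix" (one linear component plus circular components) that is fitted to hidden states and then tested with causal interventions. A minimal sketch of fitting such a basis by least squares follows; the period set, the random stand-in activations, and the pooled R\u00b2 readout are illustrative assumptions rather than the paper's actual choices.

```python
import numpy as np

def helix_basis(a, periods=(2, 5, 10, 100)):
    """Generalized-helix basis: bias, one linear column, and a cos/sin pair per period."""
    a = np.asarray(a, dtype=float)
    cols = [np.ones_like(a), a]
    for T in periods:
        cols.append(np.cos(2 * np.pi * a / T))
        cols.append(np.sin(2 * np.pi * a / T))
    return np.stack(cols, axis=1)  # shape (len(a), 2 + 2 * len(periods))

# Least-squares fit of hidden states H (numbers x hidden_dim) onto the basis:
# find C with B @ C ~= H, then check how much centered variance the fit explains.
numbers = np.arange(100)
H = np.random.default_rng(0).normal(size=(100, 32))  # stand-in for real activations
B = helix_basis(numbers)
C, *_ = np.linalg.lstsq(B, H, rcond=None)
residual = H - B @ C
r2 = 1.0 - np.sum(residual ** 2) / np.sum((H - H.mean(axis=0)) ** 2)
print(B.shape, C.shape, round(float(r2), 3))
```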
CAgBCSt8gL | Systematic Evaluation of LLM-as-a-Judge in LLM Alignment Tasks: Explainable Metrics and Diverse Prompt Templates | [
"Hui Wei",
"Shenghua He",
"Tian Xia",
"Fei Liu",
"Andy Wong",
"Jingyang Lin",
"Mei Han"
] | LLM-as-a-Judge has been widely applied to evaluate and compare different LLM alignmnet approaches (e.g., RLHF and DPO). However, concerns regarding its reliability have emerged, due to LLM judges’ biases and inconsistent decision-making. Previous research has developed evaluation frameworks to assess reliability of LLM judges and their alignment with human preferences. However, the employed evaluation metrics often lack adequate explainability and fail to address LLM internal inconsistency. Additionally, existing studies inadequately explore the impact of various prompt templates when applying LLM-as-a-Judge methods, leading to potentially inconsistent comparisons between different alignment algorithms. In this work, we systematically evaluate LLM-as-a-Judge on alignment tasks by defining more theoretically interpretable evaluation metrics and explicitly mitigating LLM internal inconsistency from reliability metrics. We develop an open-source framework to evaluate, compare, and visualize the reliability and alignment of LLM judges, which facilitates practitioners to choose LLM judges for alignment tasks. In the experiments, we examine effects of diverse prompt templates on LLM-judge reliability and also demonstrate our developed frame work by comparing various LLM judges on two common alignment datasets (i.e., TL;DR Summarization and HH-RLHF-Helpfulness). Our results indicate a significant impact of prompt templates on LLM judge performance, as well as a mediocre alignment level between the tested LLM judges and human evaluators. | [
"LLM-as-a-Judge",
"explainability",
"bias",
"prompt templates",
"LLM alignment tasks"
] | Accept | https://openreview.net/pdf?id=CAgBCSt8gL | https://openreview.net/forum?id=CAgBCSt8gL | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"tkAMJrKyhf",
"e42Lntmmpa",
"Gf4CBu0xXc",
"CZhMhMMpzl"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741099964295,
1740811933080,
1739799157124,
1740301720220
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission100/Reviewer_gu3c"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission100/Reviewer_wm1G"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission100/Reviewer_9ATn"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Great insights for real-world LLM-as-a-judge uses but clarifications needed on some important points\", \"review\": [\"Strengths:\", \"The authors introduced a set of reliability metrics of LLM-as-a-judge models with improved theoretical interpretability.\", \"The work can be useful in the real world as it demonstrated how users could mitigate some of the existing issues in LLM judges\\u2019 output in a principled way.\"], \"weaknesses\": [\"Line 176 should be \\u201c$y_r$ more frequently in ($y_r$, $y_c$)\\u201d\", \"It is unclear what \\u201cintrinsically length bias-mitigated\\u201d means in Finding 1 at line 232. Same for \\u201centangled with position bias\\u201d in Finding 2 at line 233. It\\u2019s also unclear how employing $A_\\\\text{both}$ for accuracy can help mitigate the influence of positional bias. The authors are encouraged to add in more context or examples in the main text to provide better intuition to help readers understand.\", \"The authors mentioned at line 385 that \\u201c$A_\\\\text{random}$ is less effective metric for assessing LLM judge performance compared to $A_\\\\text{both}$\\u201d, and also simply switched to using only $A_\\\\text{both}$ for evaluation in all the following sections. It leaves the readers wondering what the value of introducing $A_\\\\text{random}$ in the paper really is? The authors need to clarify.\", \"Visibility of Figure 2, Figure 3, and Figure 4 needs to be improved.\", \"Do the results generalize beyond GPT models? The results can be quite limiting as the authors only looked at models from the same family.\", \"It\\u2019d be nice if the authors can provide some empirical evidence and/or case studies to showcase the dynamic between the position bias, the length bias, and $A_\\\\text{both}$ as described in Finding 1 and Finding 2. I found that missing in the main text.\", \"One of the major contributions that the authors claimed is their open-source evaluation framework, but the link is not provided in the paper, which makes it hard to evaluate the quality of the actual framework.\"], \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Accept\", \"review\": \"Many alignment experiments, such as in scalable oversight, depend on the use of LLM judges to simulate human judges. However there is a breadth of literature (which they cite) demonstrating that LLM judges have systematic biases such as position bias, length bias and self-bias that are different from human judges.\\n\\nThe paper attempts to give some solid theoretical grounding for formalizing and evaluating such biases, and provides a systematic evaluation framework for computing the accuracy and position and length bias of different LLM judges. \\n\\nI think this paper gives a starting point to a very important field of study, and will likely be quite influential for future works involving LLM judges, such as in scalable oversight.\", \"some_comments\": \"(1) There should probably be some discussion on the natural or obvious pipelines that mitigate these biases, e.g. prompting the LLM with both orderings/a range of lengths and taking the average. You could also imagine a setup like AlphaGo: training an LLM on the output of such a pipeline to get it to \\u201cnaturally\\u201d learn to be unbiased. Also with regards to general prompt biases, maybe dspy is something relevant?\\n\\n(2) A quite related work is \\u201cconsistency checks for language model forecasters\\u201d: https://openreview.net/forum?id=r5IXBlTCGc which discovers that LLM judges change their answers based on logical transformations of the questions (e.g. P \\u2192 \\u00acP).\\n\\n(3) Are accuracy, bias etc. defined only relative to human behaviour? i.e. an LLM is accurate if it gives the same answer as a human? This might make sense for the case of using LLM judges to simulate humans, but I think some discussion of the case where we are allowed to say the human is wrong, or the human would change their mind after encountering some new information, is valuable.\\n\\n(4) Another area where the results of this paper may be valuable is: \\u201cRedesigning Information Markets in the Era of Language Models\\u201d https://openreview.net/forum?id=Zq9Dfj4nBo where you need language models to simulate their \\u201chuman principals\\u201c (humans who want the language model to make purchases on their behalf)\\n\\nOverall, this is a clear accept. Perhaps the experiments are a bit small/weak, but this is forgivable for a workshop, and in any case the theoretical contribution is valuable enough.\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"Accuracy issues due to certain assumptions\", \"review\": \"**Strengths:**\\n1. The paper tackles the reliability of LLM-as-a-Judge methods, particularly for evaluating alignment techniques like RLHF and DPO.\\n2. The authors propose interpretable metrics for position bias and length bias, unifying them under an accuracy-based framework. The explicit modeling of self-inconsistency as flipping noise is novel.\\n3. Experiments highlight the significant influence of prompt templates on LLM judge performance, a relatively underexplored area.\\n\\n**Weaknesses:**\\n1. The paper models LLM self-inconsistency as flipping noise with probability q, assuming q is independent of the true label X. This assumption may not hold, as LLMs might be more likely to flip decisions when uncertain, which often occurs when X = 0 (incorrect decision). The de-noising formula, e.g., \\n\\\\begin{equation}\\np[X = 1 \\\\mid (y_c, y_r)] = \\\\frac{p[Z = 1 \\\\mid (y_c, y_r)] - q_{cr}}{1 - 2 \\\\cdot q_{cr}}\\n\\\\end{equation}\\nrelies on this assumption. If $q \\\\mid X = 0 \\\\neq q \\\\mid X = 1$, the de-noised probabilities may be inaccurate.\\n\\n2. The definition of position bias PB is tied to accuracy, as $\\\\text{Acc}\\\\_{\\\\text{both}} = 1$ requires consistency across both response orders, implying zero position bias for that sample. The observed negative correlation between $\\\\text{Acc}\\\\_{\\\\text{both}}$ and $|\\\\text{PB}|$ may be tautological, as judges with high $\\\\text{Acc}\\\\_{\\\\text{both}}$ must have low PB by definition. \\n\\n3. Similarly, the claim that PB is \\\"intrinsically length bias-mitigated\\\" is questionable. If the LLM has a length bias, swapping positions could interact with length preferences, especially if the LLM favors longer responses in the first position.\", \"rating\": \"5\", \"confidence\": \"2\"}"
]
} |
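One review above quotes the paper's de-noising step for LLM-judge self-inconsistency, modeled as symmetric flipping noise with rate q. A direct implementation of that quoted formula, with an illustrative numeric example, is shown below; the caveat raised by the reviewer (q may in practice depend on the true label) still applies.

```python
def denoise_flip_probability(p_observed, q):
    """Invert symmetric label-flipping noise with flip rate q (0 <= q < 0.5):

        p_observed = p_true * (1 - q) + (1 - p_true) * q
        =>  p_true  = (p_observed - q) / (1 - 2 * q)

    which matches the de-noising formula quoted in the review above."""
    assert 0.0 <= q < 0.5, "flip rate must be below 0.5 for the inversion to work"
    p_true = (p_observed - q) / (1.0 - 2.0 * q)
    return min(1.0, max(0.0, p_true))  # clip numerical over/undershoot into [0, 1]

# Example: the judge flips its own decision 10% of the time (q = 0.1) and is
# observed to prefer the chosen response 78% of the time -> de-noised 0.85.
print(denoise_flip_probability(0.78, 0.10))
```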
C7Mwox3C1u | Endive: A Cross-Dialect Benchmark for Fairness and Performance in Large Language Models | [
"Abhay Gupta",
"Jacob Cheung",
"Philip Meng",
"Shayan Sayyed",
"Austen Liao",
"Kevin Zhu",
"Sean O'Brien"
] | The diversity of human language, shaped by social, cultural, and regional influences, presents significant challenges for natural language processing (NLP) systems. Existing benchmarks often overlook intra-language variations, leaving speakers of non-standard dialects underserved. To address this gap, we introduce EnDive (English Diversity), a benchmark that evaluates five widely-used large language models (LLMs) across tasks in language understanding, algorithmic reasoning, mathematics, and logic. Our framework translates Standard American English datasets into five underrepresented dialects using few-shot prompting with verified examples from native speakers, and compares these translations against rule-based methods via fluency assessments, preference tests, and semantic similarity metrics. Human evaluations confirm high translation quality, with average scores of at least 6.02/7 for faithfulness, fluency, and formality. By filtering out near-identical translations, we create a challenging dataset that reveals significant performance disparities—models consistently underperform on dialectal inputs compared to Standard American English. EnDive thus advances dialect-aware NLP by uncovering model biases and promoting more equitable language technologies. | [
"LLMs",
"NLU",
"dialectal variations",
"dialectal bias",
"LLM bias",
"AAVE",
"SAE",
"VALUE benchmark",
"MultiVALUE benchmark",
"GLUE tasks",
"SuperGLUE tasks",
"GPT-4o",
"DeepSeek v3",
"biases",
"inclusive NLP",
"ReDIAL benchmark"
] | Accept | https://openreview.net/pdf?id=C7Mwox3C1u | https://openreview.net/forum?id=C7Mwox3C1u | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"mmNcR8hL4r",
"jZvnh7F42f",
"gLFjPdK4m7",
"YN03qvlglA"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740914932347,
1740094607976,
1740456784123,
1741099657352
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission116/Reviewer_mgNd"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission116/Reviewer_YLh5"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission116/Reviewer_oQ1v"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"The paper makes a contribution by providing a detailed and large-scale benchmark for non-standard English dialects, nicely backed up with quality tests for the few-shot translation approach.\", \"review\": [\"## Summary\", \"The paper proposes a benchmark for evaluating model performance on non-standard English dialects across tasks in language understanding, algorithmic reasoning, mathematics, and logic. The authors introduce a method for translating questions from Standard American English to dialects using few-shot prompting with three verified examples informed by eWAVE. Five different LLMs are tested on this benchmark, often showing lower performance, demonstrating a drop in capability when models are prompted in non-standard English dialects.\", \"## Strengths\", \"This benchmark includes five different non-standard dialects, unlike related work that often focuses on a single dialect. The translation methodology appears easily scalable to more dialects.\", \"The evaluated LLMs are recent and widely used models.\", \"The authors conducted multiple experiments to assess translation quality, including BLEU score filtering, lexical diversity, BARTScore, fluency evaluation, and human ratings.\", \"The benchmark covers a wide variety of tasks in both zero-shot and CoT formats.\", \"The authors clearly outline the limitations of their proposed benchmark.\", \"## Weaknesses/Unclear Points\", \"I could not find a full example of the translated data. Based on the translation prompt, it seems the formality of examples may decrease after translation, making performance comparisons to the original questions unfair, as the original questions are often in a very formal style. Although human feedback rates formality highly, I would still appreciate seeing full examples after translation.\", \"Is there a reason why fluency is not tested for Multi-VALUE?\", \"On line 176, the Appendix number is missing.\", \"Overall, the paper makes a valuable contribution by providing a detailed and large-scale benchmark for non-standard English dialects.\"], \"rating\": \"7\", \"confidence\": \"2\"}",
"{\"title\": \"This paper introduces ENDIVE, a benchmark designed to evaluate the fairness and performance of large language models (LLMs) across underrepresented English dialects. The study is well-structured, with clear methodology and comprehensive experiments using few-shot prompting and human validation. The results demonstrate that LLMs consistently underperform on non-standard dialects compared to Standard American English (SAE), highlighting significant biases in current language technologies. The paper is original, relevant, and contributes valuable insights to the field. However, it is limited by a narrow task coverage and evaluation of only five LLMs. Overall, the paper is a strong candidate for acceptance, with potential for further exploration in future work.\", \"review\": \"Quality\\nThe paper is well-structured and presents a comprehensive evaluation of large language models (LLMs) across underrepresented English dialects using the ENDIVE benchmark. The methodology is robust, combining few-shot prompting with human validation to ensure linguistic authenticity. The experiments are well-designed, and the results are clearly presented, with detailed tables and analysis. The paper also acknowledges its limitations and suggests directions for future work, which adds to its credibility.\\n\\nClarity\\nThe paper is generally clear and well-written. The abstract provides a concise overview of the study, and the introduction effectively sets the stage for the research. The methodology section is detailed and explains the translation strategies and evaluation metrics clearly. However, some parts of the paper could benefit from more explicit explanations, particularly in the results section, where the implications of the findings could be discussed in greater depth.\\n\\nOriginality\", \"the_paper_addresses_a_significant_and_timely_issue_in_the_field_of_nlp\": \"the underperformance of LLMs on non-standard dialects. The introduction of the ENDIVE benchmark is novel and contributes to the ongoing discourse on fairness and inclusivity in language technologies. The paper builds on existing work but offers new insights into how dialectal variations can be systematically evaluated.\\n\\nSignificance\\nThe findings of this paper are highly relevant to the field of NLP, particularly in the context of fairness and inclusivity. The demonstration that LLMs consistently underperform on non-standard dialects compared to Standard American English (SAE) is a valuable contribution. 
This work has the potential to influence future research and practical applications in the development of more equitable language technologies.\\n\\nPros\", \"novel_benchmark\": \"The introduction of the ENDIVE benchmark is innovative and addresses a critical gap in NLP evaluation.\", \"comprehensive_evaluation\": \"The paper presents a thorough evaluation of LLMs across multiple dialects and tasks.\", \"clear_results\": \"The results are well-presented and supported by detailed tables and analysis.\", \"future_work\": \"The paper acknowledges its limitations and suggests valuable directions for future research.\\n\\nCons\", \"limited_task_coverage\": \"The benchmark focuses on 12 tasks, which may not cover all aspects of dialectal variation.\", \"model_coverage\": \"The study evaluates only five LLMs, which may not be representative of the rapidly evolving field.\", \"depth_of_discussion\": \"The implications of the findings could be discussed in greater depth, particularly in relation to existing literature and potential real-world applications.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"Timely work, sound methodology\", \"review\": \"Given the improvement in LLM performance across all languages, it is a good time to also focus on dialects -- performance on which might be affected by the model's knowledge in the corresponding non-dialect language. This work investigates this, focusing on five English dialects to profile the performance of language models on them. The methods are clearly explained, the research is timely.\", \"strengths\": [\"Inclusive and generally exhaustive. 5 English dialects. 4 task types. 12 datasets. Zero shot and CoT inference. Standard stuff. Could always be more exhaustive, but this is good enough.\", \"Proper automatic translation and filtering methods, which seem correct to me, with various scores to quality-check them: diversity scores, fluency scores, translation scores.\", \"Representative selection of models for today's state of art, even if it moves very quickly.\"], \"possible_improvements\": [\"More variety in dialects: English is already a high resource language, and most of the dialects are perhaps too similar to generalize the results to LLMs' performance in dialect\", \"Most models here have degraded performance on the dialects, the beginnings of an exploratory analysis would be great: Why does this happen? Are there certain words or phrases in certain dialects that cause this? Do these words/phrases share commonalities across dialects? Says in pg 7 that the models face challenges \\\"in coreference resolution and textual comprehension\\\" for the dialects, but no further explanation given, so perhaps the work would benefit a better qualitative analysis.\", \"The work doesn't explicitly attempt to be sociological, but it might be a good idea to provide markers that lead to sociological research and to scratch the surface of why dialects suffer. But again, very timely, meaningful work. A little top-heavy on the experimental setup and filtering, could use more qualitative analyses to balance it out.\"], \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
BD70y13DH1 | WebGauntlet: Measuring Instruction Following and Robustness for Web Agents | [] | Recent advances in language model (LM) agents and tool calling have enabled autonomous, iterative systems to emulate digital behavior in a variety of environments. In order to better understand the instruction-following limitations of LM agents, we introduce WebGauntlet, a benchmark that stress tests the robustness of web agents in realistic online environments. Our environment replicates online e-commerce settings for agents to traverse and perform simple tasks for users. Our threat model concretizes dozens of environment-side attacks and finds that LM agents struggle to traverse past simple adversarial content, where our strongest threats average an attack success rate (ASR) of 98.92%. We analyze trajectories to explore the failures of web agents and better understand vision-language model (VLM) limitations. WebGauntlet supports the study of agent safety, demonstrating the gaps in performance between a spectrum of adversarial and safe environments. | [
"Language Agents",
"Benchmarks",
"Web Agents",
"AI Safety",
"Robustness"
] | Reject | https://openreview.net/pdf?id=BD70y13DH1 | https://openreview.net/forum?id=BD70y13DH1 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"rwiS0CwaSU",
"pWqrKg88QQ",
"dMIl3EjAbN",
"FHjAIhrNBc"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740813860998,
1740898079354,
1740810678553,
1741084671138
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission56/Reviewer_Ruve"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission56/Reviewer_kUok"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission56/Reviewer_dnb9"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Great engineering efforts with interesting robustness findings for Web Agents under environment-side attacks, but with major presentation issues and limited scope.\", \"review\": \"Pros:\\n\\n1. The work demonstrates substantial engineering effort in building and hosting a reproducible sandbox environment with realistic adversarial web attacks.\\n2. Interesting results showing state-of-the-art agents (based on GPT-4 or Claude) are vulnerable to webpage manipulations.\\n3. Dividing attacks into Benign, Human, and Agent categories provides structured insights into specific failure modes\", \"cons\": \"1. presenting style \\n 1. Noticeable presentation issues in (e.g., Fig. 7(Blank figure), Appendix C (out of bounds), etc.)\\n 2. Overall writing resembles a technical report rather than a formal scientific publication (e.g., missing rigorous definitions or more formalized exposition).\\n2. Limited domain of only e-commerce\\n 1. The study focuses exclusively on an e-commerce scenario, limiting broader generalization.\\n 2. Most evaluations appear restricted to single-site settings, leaving multi-domain or multi-site unexplored\\n3. Single Agentic framework (WebVoyager) tested, would be a plus to test on more recent open sourced framework such as OpenHands(https://arxiv.org/abs/2407.16741), and specialized web agent such as ScribeAgent (https://arxiv.org/abs/2411.15004)\\n4. Different frameworks leverage distinct observation formats (screenshots vs. DOM trees). Expanding tests to agents with other observation strategies would help validate whether findings hold universally.\\n\\nOverall, the paper requires some revision, although it develops and tests with several attacks, which is valuable, it has a really limited scope and systems tested. Additionally, the paper has several presenting issues that should be modified.\", \"rating\": \"5\", \"confidence\": \"5\"}",
"{\"title\": \"Review for the WEBGAUNTLET Paper (Measuring Instruction-Following for Web AI Agents)\", \"review\": \"# Summary\\nThis paper introduces WEBGAUNTLET, a novel benchmark that assesses the safety and resilience of language model (LM) agents in practical online e-commerce environments. The environment simulates typical e-commerce websites with integrated adversarial content, allowing researchers to analyze the degree to which agents follow instructions and are resistant to several online attacks. The benchmark integrates a diverse threat model with different types of attacks, such as redirections, data scraping, and system notifications, placed in many different positions on the website.\\n\\nAuthors' experiments with WEBGAUNTLET demonstrate that current LM agents struggle to navigate through even simple adversarial material, with highly effective attacks. Comparing this with the human baseline, the human baseline does not have a problem completing the tasks, highlighting the vast gap between human and agent performance in these environments. With the release of the WEBGAUNTLET environment, authors wish to encourage more work on increasing the safety and robustness of web agents.\\n\\n# Pros and Cons\\n## Pros\\n1. **Results:** Some of the results seem interesting. However, a clear attack model is missing, making it hard to interpret the impact of the paper.\\n2. **Motivation:** The overall motivation behind the paper is well-justified. However, there is a clear lack of motivation (or lack of expressed motivation) behind many design choices made for the construction and evaluation of the benchmark.\\n\\n## Cons\\n1. **Quality and Completeness:** The paper seems to be written in a rushed manner with some figures missing (line 334) and many concepts and definitions are not well-explained. An arbitrary decision-making process seems to have been involved for the different parts of the benchmark generation, including for using \\\"agent-specific\\\" instructions for the attacks (or jailbreaks if that's what they mean), the overall categories of attacks (Benign, Human, Agent-Specific), and the \\\"operational modes\\\" (single-mode vs multi-mode). There seems to be a major lack of clarity for these concepts in the paper. There are also lots of grammatical and dictation errors throughout the paper. In terms of the completeness, there seems to be terms (such as \\\"the randomization algorithm\\\") used in specific contexts before which they are not well-explained at all.\\n2. **Scope:** The scope of the benchmark seems too limited to be practically useful for real-world evaluations of the robustness of web agents. It only consists of a single simulation direction (online e-commerce simulations), making it bound to a very specific context. And even within this context, the structure of the simulation platform designed by the authors seems to be not very flexible. It is particularly unclear what safety/security properties are being evaluated with each of the tests/samples and the evaluation directions seem too broad. \\n3. **Data Generation:** It is very unclear how the data is generated for the benchmark and how it could be generated in a scaled manner to further generalize this approach to the generation of more complex or domain-specific benchmarks.\\n4. 
**Novelty:** I see that defining this new benchmark with \\\"simple\\\" tasks that the authors claim the agents fail on could be considered fairly novel, but I think this idea has to be much better developed into actually building a robust benchmark that can be useful for the community in evaluating the robustness of web agents in a reliable manner. The over-generalizations and lack of systematic design in the benchmark generation process make it hard to see the reliability aspect.\\n\\n# Minor/Major Errors\\n1. \\\"agent-specific\\\" should be used instead of \\\"agent specific\\\" on line 202.\\n2. \\\"deployed\\\" should probably be used instead of \\\"deploy\\\" on line 232? This part of the paper, i.e. the \\\"Operational Modes\\\" section, is also very unclear.\\n3. \\\"successfully\\\" should be used instead of \\\"successful\\\" on line 255.\\n4. Many grammatical errors on line 257.\\n5. **Figure missing on line 334.**\\n\\n\\n# Questions\\n1. Shouldn't there be citations for the defined metrics? Do the authors claim that the metrics are completely novel?\", \"rating\": \"2\", \"confidence\": \"4\"}",
"{\"title\": \"Potentially Useful Web LM Agent Benchmark with Limited Evaluation and Presentation Issues\", \"review\": \"Summary:\\nThe authors propose a benchmark for evaluating LM agents in adversarial web e-commerce tasks. They construct an e-commerce site with a scrollable product grid, detailed product pages, a shopping cart, and a product search engine. The benchmark introduces three threat patterns\\u2014benign, human-specific, and agent-specific\\u2014which manifest as pop-ups, banners, ad slots, etc. The authors evaluate two LMs, GPT-4o and Claude-3.5 Sonnet, finding both susceptible to different threat patterns, with attack success rates (ASR) reaching 98.92%.\", \"originality_and_significance\": \"The work builds on web LM agent research and introduces an adversarial evaluation benchmark for web e-commerce tasks, making a timely contribution to the safe deployment of web LM agents.\", \"pros\": [\"Evaluating LM-based web agents in adversarial settings is timely and important\", \"The benchmark effectively highlights vulnerabilities in VLM web agents, demonstrating high ASR (up to 98.92%)\", \"A variety of attack types and placements are considered\"], \"cons\": [\"The paper considers a limited model pool with only gpt-4o and claude-3.5-sonnet\", \"No experiments exploring the impact of prompting techniques or safety-prompting on agent performance, potentially undermining LM agent performance\", \"Statistics on the test cases are lacking, e.g. how many examples are evaluated for each threat model?\", \"The claim made in subsection \\\"Agents quickly learn how to get by benign attacks\\\" in section 5 is unclear and lacks references to supporting tables or results\", \"Presentation issues: uses inconsistent terminology (e.g., \\\"Normal\\\" used instead of \\\"human-specific\\\" in Table 3) and broken figure reference in Section 5; overall lacklustre presentation\"], \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
Ak1dZpx3P3 | Rethinking LLM Bias Probing Using Lessons from the Social Sciences | [
"Kirsten Morehouse",
"Siddharth Swaroop",
"Weiwei Pan"
] | The proliferation of LLM bias probes introduces three significant challenges: (1) we lack principled criteria for choosing appropriate probes, (2) we lack a system for reconciling conflicting results across probes, and (3) we lack formal frameworks for reasoning about when (and why) probe results will generalize to real user behavior. We address these challenges by systematizing LLM social bias probing using actionable insights from social sciences. We then introduce EcoLevels – a framework that helps (a) determine appropriate bias probes, (b) reconcile conflicting findings across probes, and (c) generate predictions about bias generalization. Overall, we ground our analysis in social science research because many LLM probes are direct applications of human probes, and these fields have faced similar challenges when studying social bias in humans. Based on our work, we argue that the next frontier of LLM bias probing can (and should) benefit from decades of social science research. | [
"bias probing",
"LLMs",
"EcoLevels",
"interdisciplinary",
"social bias",
"psychological theory"
] | Accept | https://openreview.net/pdf?id=Ak1dZpx3P3 | https://openreview.net/forum?id=Ak1dZpx3P3 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"tgjnXpXjlR",
"RSEcqlGN03",
"ICrrqdARUn",
"58mDyianGf"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1741100023463,
1740814679087,
1740869985846,
1740940449876
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission86/Reviewer_U3y8"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission86/Reviewer_46ir"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission86/Reviewer_AqR8"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review: \\u201cRethinking LLM Bias Probing Using Lessons from the Social Sciences\\u201d\", \"review\": \"**Review**\\n\\n### Summary\\nThis paper explores the growing landscape of social bias probes in Large Language Models (LLMs), drawing on insights from social sciences, especially from research on implicit and explicit biases in humans. The authors highlight three central challenges in LLM bias research: (1) how to select the most appropriate bias probes, (2) how to reconcile conflicting findings across probes, and (3) how to determine when bias probe results generalize to real user behavior. As a remedy, they introduce **EcoLevels**, a taxonomy that sorts bias probes into three levels (associations, task-dependent decisions, and naturalistic output) and leverages the concept of _ecological validity_ to guide probe selection. The paper concludes with recommendations for systematically detecting and interpreting LLM bias using social-scientific frameworks.\\n\\n### Quality\\nThe paper is well-researched and uses a structured approach, combining a literature review of existing bias probes with theoretical insights from experimental psychology. The arguments about how measurement decisions can radically affect whether and how bias is detected are convincing. The proposed taxonomy is presented cohesively, and the manuscript includes illustrative examples of how one might use EcoLevels to address practical questions (e.g., detecting gender-occupation bias).\\n\\n### Clarity\\nThe writing is generally clear, with accessible explanations of complex social science constructs (e.g., implicit vs. explicit measures in psychology) and how they map to LLM biases. The motivation for a new taxonomy is well-articulated, and the paper effectively distinguishes EcoLevels from existing categorizations (e.g., intrinsic/extrinsic bias). The figures and tables further help in conveying the main points, and the paper includes a helpful glossary of terms.\\n\\n### Originality\\nWhile there have been various recent efforts to categorize or compare LLM bias probes, this paper\\u2019s core contribution is novel: grounding bias detection and measurement in social science theory and formalizing it through an \\u201cecological validity\\u201d lens. The analogy to direct and indirect measurement in humans (self-report vs. reaction-time tasks) has been discussed in prior work, but the paper\\u2019s specific solution, EcoLevels, is distinctive and provides a fresh viewpoint on systematically organizing different classes of prompts.\\n\\n### Significance\\nThis work has the potential to steer the field toward more principled and theory-driven methodologies for bias probing. Given the proliferation of ad-hoc bias benchmarks, researchers and practitioners alike could benefit from a clearer framework that helps them choose probes aligned with the specific construct and use-case. 
Its emphasis on boundary conditions and reconciling conflicting results is particularly important, as it encourages building more precise theories about where and why biases in LLMs emerge or disappear.\\n\\n---\\n\\n## Pros and Cons\\n\\n**Pros**\\n- **Novel organizational framework (EcoLevels)** for mapping LLM bias probes to different abstraction levels.\\n- Strong alignment with established social-science concepts (implicit/explicit attitudes, social desirability, etc.).\\n- Offers **practical guidance**: how to choose probes, how to interpret conflicting results, how to improve ecological validity.\\n- Encourages **narrowed-down research questions**, which can lead to more replicable and interpretable findings.\\n- Provides **testable hypotheses** (e.g., that association-level prompts should correlate more closely with underlying corpus statistics).\\n\\n**Cons**\\n- Some **borderline cases** between association-level and task-dependent prompts, which might require further clarification (the authors do note this).\\n- The application examples mostly focus on **gender-occupation bias**; examples from other social groups or more emergent LLM tasks might strengthen the generalizability.\\n- **Implementation details** of how to operationalize ecological validity in real deployments may require additional depth or guidance (e.g., how to rigorously measure it, especially for newly emerging tasks).\\n\\n---\\n\\n### Recommendation\\nI recommend **acceptance**. The paper is timely, provides a clear conceptual framework to unify disparate threads in bias detection, and offers a novel lens (EcoLevels) that is both theoretically grounded and practically applicable. The discussion of boundary conditions\\u2014viewing conflicting results not simply as \\u201cmixed evidence\\u201d but as an opportunity to refine our understanding\\u2014especially stands out as a valuable perspective for researchers in this space.\\n\\n### Minor Suggestions\\n- While the paper cites relevant studies comparing multiple bias probes, adding a concise **empirical demonstration** (a small-scale experiment) might reinforce the taxonomy\\u2019s utility in reconciling contradictory results.\\n- The authors could incorporate short illustrative **pseudo-code** or demonstration prompts for each level, which would aid reproducibility and clarity.\\n- A formulaic approach to \\u201cscoring\\u201d ecological validity (e.g., \\n\\\\[\\n\\\\text{EcoScore} = \\\\alpha \\\\cdot \\\\text{DomainMatch} + \\\\beta \\\\cdot \\\\text{TaskRealism} + \\\\gamma \\\\cdot \\\\text{PopulationOverlap} \\n\\\\]\\n) might help readers see how weighting different aspects of real-world alignment could yield a final decision metric for probe selection.\\n\\nOverall, the manuscript makes a strong case that future LLM bias research should be systematic, construct-valid, and grounded in social-science insights. The EcoLevels framework could serve as a foundational reference for researchers looking to build robust, interpretable, and generalizable bias detection methodologies.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"This paper makes a valuable contribution by systematically applying social science insights to LLM bias probing, but its practical applicability and validation remain limited\", \"review\": [\"## Strengths\", \"The introduction of EcoLevels provides a structured and theory-driven way to categorize and compare LLM bias probes\", \"The paper is well-grounded in social psychology research, drawing parallels between human and LLM biases\", \"The critique of current bias probe taxonomies shows that existing categorizations lack precision and practical usability\", \"The authors illustrate how researchers can systematically choose appropriate bias probes by applying EcoLevels to gender-occupation bias\", \"## Weaknesses\", \"The EcoLevels framework lacks rigorous quantitative experiments or benchmarks to prove its effectiveness. Testing EcoLevels across multiple bias probes would have strengthened its credibility.\", \"The paper assumes that LLM biases function similarly to human biases, but this is not always the case\", \"LLM biases often differ across architectures, datasets, and training methodologies. The paper does not discuss well enough how EcoLevels applies across different model families.\", \"It is not clear on how one can integrate the EcoLevels into standard LLM evaluation pipelines\"], \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"Review\", \"review\": \"The paper organizes the bias probing literature for ML models , presents challenges and confusion arising from the current state of bias probes and proposes EcoLevels -- a framework to systematically choose probes, make sense of the results and predict impact/generalization.\\n\\nI enjoyed reading this paper. The survey and organization of existing literature, challenges is very enlightening.\", \"suggestions\": \"The section on EcoLevels feels weak. By the time reader reaches this section, they have already gained enough knowledge about the area and due to this EcoLevels feels obvious and not novel/significant. Also the real estate given to this section which is supposed to be the main contribution of the paper is pretty low. You might want to reorganize your writing or/and make this section more substantial through additional contributions.\", \"typo\": \"L148-149 \\\"and \\u201chome\\u201d) share a response key\\\" -- home should be career?\", \"rating\": \"8\", \"confidence\": \"5\"}"
]
} |
AZHBPzPCS5 | Privately Learning from Graphs with Applications in Fine-tuning Large Pretrained Models | [
"Haoteng Yin",
"Rongzhe Wei",
"Eli Chien",
"Pan Li"
] | Graphs offer unique insights into relationships and interactions between entities, complementing data modalities like text, images, and videos. By incorporating relational information from graph data, AI models can extend their capabilities beyond traditional tasks. However, relational data in sensitive domains such as finance and healthcare often contain private information, making privacy preservation crucial. Existing privacy-preserving methods, such as DP-SGD, which rely on gradient decoupling assumptions, are not well-suited for relational learning due to the inherent dependencies between coupled training samples. To address this challenge, we propose a privacy-preserving relational learning pipeline that decouples dependencies in sampled relations during training, ensuring differential privacy through a tailored application of DP-SGD. We apply this method to fine-tune large language models (LLMs) on sensitive graph data, and tackle the associated computational complexities. Our approach is evaluated on LLMs of varying sizes (e.g., BERT, Llama2) using real-world relational data from four text-attributed graphs. The results demonstrate significant improvements in relational learning tasks, all while maintaining robust privacy guarantees during training. Additionally, we explore the trade-offs between privacy, utility, and computational efficiency, offering insights into the practical deployment of our approach. Code is available at https://github.com/Graph-COM/PvGaLM. | [
"differential privacy",
"relational learning",
"private learning",
"language models",
"fine-tuning"
] | Accept | https://openreview.net/pdf?id=AZHBPzPCS5 | https://openreview.net/forum?id=AZHBPzPCS5 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"b9T6I9bZEg",
"a365s7zty9",
"ZYMQZE3U37"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1740910016730,
1741083056877,
1740915770592
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission30/Reviewer_MUU5"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission30/Reviewer_XiyP"
]
],
"structured_content_str": [
"{\"title\": \"The paper identifies the problem of applying differential privacy in relation learning especially DP-SGD. Provide a novel privacy-preserving pipeline for fine-tuning LLM as proof-of-concept.\", \"review\": \"Strengths:\\n1. The paper is well-written and easy to follow\\n2. The problem statement of applying differential privacy in graphs has broad impact one of them being fine-tuning LLM in relational learning\", \"weakness_and_errors\": \"1. Typo error on line 255 d,p are the input and output dimensions not d,q\\n2. The novelty of the paper can be improved upon negative sampling and efficient per-tuple gradient estimation\", \"questions\": \"1. Experiments using GNN based methods are missing\\n2. Experiments on large scale homophilic and heterophilic graph datasets\", \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Decent work on differentially private relational learning\", \"review\": \"### Summary\\n\\nThis paper addresses the problem of differentially private learning for relational data. Since DP-SGD relies on gradient decoupling, it is not directly applicable, and the problem addressed here is relevant. The authors approach this problem by introducing a method to decouple gradients by sampling relations.\\n\\n\\n### Strengths\\n\\n1. This paper addresses a relevant problem, since differentially private learning poses several challenges in relational learning.\\n2. Nicely written. The organization of the paper is generally good and the flow is easy to follow.\\n3. Technical quality of this work is decent.\\n4. The figure 1 is quite instructive and helps understand the approach better.\\n\\n### Weaknesses\\n\\n1. The formulation of this work is not very novel. Negative sampling methods have already been explored and this paper combines it with DPSGD to show its use in privacy preservation.\\n2. A rather interesting point from Table 8 is that the performance of the model with $\\\\varepsilon=4$ outperforms the non-private model. This result merits discussion but is not discussed in the paper.\", \"rating\": \"6\", \"confidence\": \"3\"}"
]
} |
ATzaR0MYeq | Justified Trust in AI Fairness Assessment using Existing Metadata Entities | [
"Alpay Sabuncuoglu",
"carsten maple"
] | AI is becoming increasingly complex, opaque and connected to systems without human oversight. As such, ensuring trust in these systems has become challenging, yet vital. Trust is a multifaceted concept which varies over time and context, and to support users in making decisions on what to trust, work has been recently developed in the trustworthiness of systems. This includes examination of the security, privacy, safety and fairness of a system. In this work, we explore the fairness of AI systems. While mechanisms, such as formal verification, aim to guarantee properties such as fairness, their application in large-scale applications is rare due to cost and complexity issues. A major approach that is deployed in place of formal methods involves providing claims regarding the fairness of a system, with supporting evidence, to elicit justified trust in the system. Through continuous monitoring and transparent reporting of existing metadata with model experiment logs, organisations can provide reliable evidence for claims. This paper provides details of a new approach for evidence-based trust. We share our findings from a workshop with industry professionals and provide a practical example of how these concepts can be applied in a credit risk analysis system. | [
"justified trust",
"fairness",
"transparency artefacts",
"machine learning metadata",
"fairness assessment"
] | Accept | https://openreview.net/pdf?id=ATzaR0MYeq | https://openreview.net/forum?id=ATzaR0MYeq | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"oipDFfK6RY",
"dJjLARCsPn",
"Z84NUceRSY",
"JYBi7SFLv5"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740877082002,
1740881541464,
1740858207251,
1741100065111
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission74/Reviewer_aLxr"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission74/Reviewer_WBy7"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission74/Reviewer_KSLP"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Justified Trust in AI Fairness Assessment Using Existing Metadata Entities\", \"review\": \"This paper proposes leveraging model logs and transparency artifacts (data cards, model cards, fairness logs) to build evidence-based trust in AI fairness claims. The methodology is demonstrated through a credit risk assessment case study and informed by industry practitioner feedback.\", \"strengths\": [\"Addresses practical, evidence-based trust rather than abstract fairness principles with emphasis on continuous monitoring\", \"Structured approach using existing metadata artifacts aligned with industry standards and regulations\", \"Incorporates real-world practitioner feedback from industry workshops on implementation challenges\", \"Concrete credit risk case study demonstrating regulatory alignment (UK Equality Act, FCA rules)\", \"Useful discussion on automating fairness monitoring through metadata logging\"], \"weaknesses_and_suggestions\": [\"Lacks empirical validation of effectiveness beyond documentation practices\", \"Limited case study scope focused only on credit risk assessment\", \"Insufficient clarity on how this approach complements or replaces active fairness interventions\", \"Workshop findings presented without systematic analysis of practitioner feedback\", \"Does not critically examine effectiveness of existing regulations or address gaps\", \"Limited novelty beyond organizing existing best practices\", \"Unclear connection between metadata tracking and actionable fairness improvements\", \"Ambiguous balance between automation and necessary human oversight\", \"I rate this paper 6 because it is a practical approach to AI fairness monitoring aligned with industry practices, but requires additional empirical validation and broader application testing to demonstrate effectiveness beyond documentation.\"], \"rating\": \"6\", \"confidence\": \"2\"}",
"{\"title\": \"Official Review\", \"review\": \"## Summary\\nThe paper proposes a standardized format for reporting information about an ML system's fairness. This is done by augmenting currently used meta-data (e.g. the model card and the data card) with more explicit fairness data, as well as a new log for experiments and results around fairness. The paper also demonstrates an application of this format of meta-data in for a credit risk analysis application, and proposes some simple heuristic based checks for ensuring fairness from the parsed meta-data files.\\n\\n## Strengths\\n* The paper addresses an important issue for ensuring trust in ML models.\\n* The core of the paper's proposal can be added onto existing meta-data files (i.e. data, model cards) which are already released in ML workflows\\n* The case-study clarifies the contributions and the applications of the proposed system.\\n\\n## Weaknesses\\nI would like to preface this section by stating that I do not have enough experience as an industrial ML practitioner, or an HCI/fairness researcher to critique this paper authoritatively.\\n* The paper does not seem to add much over and above existing systems around model releases in my opinion. The paper also does not clarify what incentive corporations would have to release additional meta-data around fairness (aside from the case-study of EU regulations)\\n* Some of the proposals are a bit impractical for larger ML systems (particularly LLMs), where training data is not revealed (or audited properly), and model cards do not contain much information about the training procedure either. An idealized case study in this setting could help guide future model releases greatly.\", \"rating\": \"6\", \"confidence\": \"2\"}",
"{\"title\": \"Review\", \"review\": \"The paper tackles an important problem, and justifies their framework through a real use case. This paper seems like very relevant to the BuildingTrust workshop.\", \"some_relevant_papers_to_cite_regarding_the_definition_of_trust\": \"1. John D Lee and Katrina A See. 2004. Trust in Automation: Designing for Appropriate Reliance. Human factors, 46.\\n2. Alon Jacovi, Ana Marasovic, Tim Miller, and Yoav Goldberg. 2021. Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI. In ACM Conference on Fairness, Accountability, and Transparency (FAccT).\", \"rating\": \"8\", \"confidence\": \"2\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
9yUvxPhvQX | Strengthening Robustness to Adversarial Prompts: The Role of Multi-Agent Conversations in Large Language Models | [] | While large language models have shown impressive capabilities in problem-solving, understanding, and reasoning \citep{Touvron2023, Du2023}, they remain susceptible to sophisticated adversarial prompts that can manipulate models to generate harmful outputs \citep{Zou2023, Wei2023}. Current defense mechanisms, such as self-refinement and safety guardrails \citep{Korbak2023, Robey2023}, have shown limited effectiveness against these attacks. Building upon the multi-agent debate framework \citep{Chern2024}, our research demonstrates how extended debates among diverse debaters enhance model resilience \citep{Chan2023}. Using multiple attack techniques, we assess toxicity and attack success across varying debaters and debate lengths \citep{Ganguli2022, Perez2022}. Our results demonstrate that cross-provider debates with extended interaction periods achieve significantly lower toxicity scores than single-provider systems. These findings advance our understanding of collaborative defense mechanisms in language models \citep{Cohen2023}. | [
"large language models",
"adversarial prompts",
"extended debates",
"multi-agent framework",
"toxicity reduction",
"cross-provider",
"defense mechanisms",
"model resilience"
] | Reject | https://openreview.net/pdf?id=9yUvxPhvQX | https://openreview.net/forum?id=9yUvxPhvQX | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"OyR8NONpNG",
"LEeO3qrPtu",
"BupQp3M2vh",
"8qBAotOUEC"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740049125393,
1740890974703,
1741099551790,
1740109992260
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission140/Reviewer_V5yh"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission140/Reviewer_TFYW"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission140/Reviewer_T6aD"
]
],
"structured_content_str": [
"{\"title\": \"Investigation of cross-provider LLM debates to improve safety with significant flaws in presentation and unclear results\", \"review\": \"The key claim of this paper is that debates between LLMs sourced from multiple providers improves robustness against adversarial attacks over debates with LLMs sourced from a single provider as was shown in [1].\", \"pros\": [\"investigating the impact of using a \\\"council\\\" of diverse LLMs makes sense as it may take advantage of different model's strengths in ways a simple self-reflective debate cannot.\", \"evaluates number of debate rounds as an important hyperparameter (but is not the first to do so: e.g. [1])\"], \"cons\": [\"only compares 3 model families, including custom & non-reproducible fine-tunes\", \"most results are inconclusive and noisy (e.g. Table 2, Ministral API scores jump from 0.0593 to 0.5929 and back to 0.0595 for steps 1,2, and 3), making them seem unreliable\", \"the main finding which is supported relatively well by the data (pairing a misaligned model with a harmless one will help make the misaligned one safer) is expected and not suprising\", \"confusing and ambiguous presentation (e.g., Table 1: what is \\\"ASR\\\" and \\\"GPT\\\"?, Table 2: what is \\\"GCG\\\"?, what is \\\"API\\\"? - is \\\"ASR\\\" in Table 1 the same as \\\"ASR[GPT] in table 2\\\"?) you mention that you use RoBERTa, Llama-Guard, GPT-4, and the protocols from [2] for evaluation, but it is not clear at all where those are used.\", \"L198-L200: you mention that you fine-tune models into harmless and harmful configurations, but provide no information at all as to how this is done in detail (no information data, training protocol, etc.)\", \"writing is wordy (e.g. L228-244 could be reduced to 3 sentences without losing important information)\", \"minor inconsistency issues (e.g. NeuralExec vs Neural Exec)\", \"We recommend that the authors considerably rework this paper to improve the presentation (especially in terms of clarity and reproducibility/detail) and to include additional, more robust experiments in more diverse settings (i.e. debates among models from more than two providers) to ensure that the results generalize.\", \"[1] Chern, Steffi, Zhen Fan, and Andy Liu. \\\"Combating Adversarial Attacks with Multi-Agent Debate.\\\" arXiv preprint arXiv:2401.05998 (2024).\", \"[2] Zou, Andy, et al. \\\"Universal and transferable adversarial attacks on aligned language models.\\\" arXiv preprint arXiv:2307.15043 (2023).\"], \"rating\": \"3\", \"confidence\": \"4\"}",
"{\"title\": \"Unsupported claims and poor writing\", \"review\": \"The paper aims to investigate multi-agent debate between different model families (\\\"cross-provider debate\\\") as a defense mechanism against adversarial prompting attacks, comparing it to single-provider debate. While this is a logical extension of existing work on single-provider debate and worth investigating, the paper falls significantly short of its stated goals.\\n\\n## Unsupported Central Claim\\n\\nThe most fundamental problem is that the authors fail to substantiate their main claim. In the abstract, they state: \\n\\n> \\\"Our results demonstrate that cross-provider debates with extended interaction periods achieve significantly lower toxicity scores than single-provider systems.\\\" \\n\\nHowever, the paper never presents or discusses any toxicity-related metrics in the main text. Not a **single** toxicity score is mentioned, with the focus exclusively on attack success rate (ASR). \\n\\nIt's frankly bizarre that toxicity is repeatedly mentioned as a key evaluation component without any corresponding results:\\n\\n> \\\"The execution framework maintains the structured guide-text implementation specified in the original Neural Exec methodology, ensuring architectural consistency while adapting payload content for focused **toxicity** evaluation.\\\"\\n\\n> \\\"We implement RoBERTa-based classification for fundamental **toxicity** assessment, augmented by Llama-Guard safety metrics for comprehensive security evaluation.\\\"\\n\\n> \\\"Furthermore, our evaluation metrics, while comprehensive, may not capture all relevant aspects of model behavior during debates. The focus on ASR and **toxicity metrics** could overlook other important dimensions of...\\\"\\n\\nIt\\u2019s possible that the values in the GCG columns of the tables represent toxicity scores. However, the authors never define this acronym, explain how these values were calculated, or discuss them anywhere in the text. Why include this data at all?\\n\\n## Inadequate Experimental Design\\n\\nEven if we charitably assume the authors meant to refer to ASR rather than toxicity in their abstract's main claim, their evidence remains inadequate. They compare their results to only a **single** baseline of single-provider debate, with just **two** examples of cross-provider debate claimed to either outperform or match single-provider setups:\\n\\n- Ministral [Harmful] vs Gemma [Harmless]: ASR reduction from 0.5985 to 0.076 over five rounds\\n- Llama-2-7b [Harmful] vs Ministral [Harmless]: ASR reduction from 0.7010 to 0.3670\\n\\nTheir baseline (Table 1) shows:\\n- Llama-2-7b [Harmful] vs Llama-2-7b [Harmless]: ASR reduction from 0.7929 to 0.399 over two rounds\", \"this_comparison_is_fundamentally_unfair_for_two_reasons\": \"1. Comparing cross-provider debate between Ministral [Harmful] and another model to single-provider debate between Llama models doesn't account for Ministral potentially being more susceptible to this debate strategy than Llama. A proper evaluation would establish single-provider baselines for multiple models and compare these to cross-provider results, including each model against itself in both scenarios.\\n\\n2. The authors compare five rounds of debate for the Ministral case to only two rounds for Llama. When looking at just two rounds, the results are actually comparable. 
For Llama vs Ministral, the ASR reduction is slightly **less** than the baseline and they don't include a table for this experiment, so we don't know how many rounds the data corresponds to.\\n\\n## Presentation and Writing Issues\\n\\nIn addition to these fundamental issues, the paper's poor writing and presentation would possibly still warrant rejection even if the claims were substantiated. The paper suffers from several significant problems:\\n\\n1. **Vague Methodology**: The paper doesn't provide adequate explanation on how the evaluation metrics are calculated or how and why they will be incorporated in the analysis e.g. \\n\\n > \\\"The evaluation protocol further incorporates GPT-4 based classification (Chao et al., 2023) for independent verification of model outputs\\\". \\n\\n2. **Undefined Terms**: The authors fail to define acronyms used in tables (ASR, GCG, API, ASR(GPT), GPT), or what the values actually represent.\\n\\n3. **Poor citation use**: There are several cases of poor citation practice, such as:\\n\\n > \\\"Cross-provider debates demonstrate varying effectiveness against Neural Exec attacks (Pasquini et al., 2024),\\\"\\n\\n This citation appears to support the claim, but actually refers to the Neural Exec paper that has nothing to do with debate and has already been cited earlier.\\n\\n4. **Verbose Writing**: The writing is unnecessarily complex and often difficult to follow. For example, section 3.3 reads:\\n\\n > \\\"Our evaluation methodology implements a comprehensive framework for assessing defensive effectiveness. The primary evaluation metrics extend beyond traditional toxicity assessment through the integration of multiple complementary evaluation systems (Zou et al., 2023). We implement RoBERTa-based classification for fundamental toxicity assessment, augmented by Llama-Guard safety metrics for comprehensive security evaluation (Wei et al., 2023). This multi-faceted approach enables nuanced analysis of defensive capabilities across varying attack conditions.\\n >\\n > The evaluation protocol further incorporates GPT-4 based classification (Chao et al., 2023) for independent verification of model outputs, providing additional validation of defensive effectiveness. Attack success rates are measured following established protocols (Zou et al., 2023), enabling systematic comparison with existing defensive mechanisms. This comprehensive evaluation framework enables detailed analysis of how architectural diversity and interaction protocols influence defensive capabilities against sophisticated adversarial attacks.\\n >\\n > Through this methodological framework, we systematically investigate the effectiveness of cross-provider debates while maintaining rigorous experimental controls and evaluation standards. The methodology enables detailed analysis of how model diversity, interaction protocols, and defensive mechanisms interact in complex adversarial environments, advancing our understanding of collaborative defense strategies in language model systems.\\\"\", \"this_entire_section_could_be_written_more_clearly_and_concisely_as\": \"\\\"We assess defense effectiveness using three complementary methods. First, we measure toxicity with RoBERTa-based classification and Llama-Guard safety metrics. Second, we use GPT-4 to independently verify model outputs. Third, we calculate attack success rates using established protocols from Zou et al. (2023). 
This approach allows us to analyze how different model architectures and debate interactions affect defense against adversarial attacks. By combining these metrics, we can make meaningful comparisons with existing defense mechanisms.\\\"\\n\\n\\n## Recommendations\", \"the_authors_would_benefit_from\": \"1. Clearly identifying and presenting the toxicity metrics that form the basis of their main claim\\n2. Designing fair comparisons with appropriate baselines across multiple models\\n3. Using more straightforward language and reducing over-the-top adjectives\\n4. Properly defining all terms and metrics used in tables and analysis\\n5. Focusing on communicating their methodology and contributions in plain, precise terms.\\n6. Ensuring round counts are consistent across experiments being directly compared\\n\\nAs it stands, the paper fails to demonstrate its central claim and suffers from significant methodological and presentation issues that undermine its scientific contribution.\", \"rating\": \"2\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Interesting work that built on top of Chern et al., 2023, but the analysis and the approach proposed aren't convincing enough to advance knowledge in the field\", \"review\": [\"Interesting work, but I don't see clear evidence of novel findings. Specifically, some of my reasons are:\", \"This work adopted the approach from Chern et al., 2023. Instead of using models from one model provider to run a multi-agent debate, this work uses models from different providers. If this is the paper's main contribution, I'm unsure if this is novel enough, even for a workshop paper.\", \"The claim of \\\"The combination of harmful-aligned Ministral with harmless-aligned Gemma achieves dramatic ASR reduction from 0.5985 to 0.076 across five debate rounds, surpassing baseline single-provider performance. \\\", this is not fair, right? The baseline single-provider performance shown only used 2 rounds, and if only comparing 2 rounds, the ASR reduction is not a big difference.\", \"I'm not sure this is a realistic evaluation if the models used in this paper, e.g., llama 2 7b and Ministral-8b, are so weak that no one uses them in real life. Why not test this method on SoTA models with the SoTA jailbreaking techniques to show this approach's effectiveness in safeguarding the models people actually use?\", \"The multi-agent debate takes time. It wouldn't be great if each user query took a long time to run a multi-agent discussion before responding. I would like to see the average time spent each round.\"], \"minor_thing\": \"- It is a bit uncommon to see so many citations in the abstract\\n\\n\\nHappy to raise my rating if the responses are convincing.\", \"rating\": \"4\", \"confidence\": \"3\"}"
]
} |
9obhyu9csa | Boosting Adversarial Robustness of Vision-Language Pre-training Models against Multimodal Adversarial attacks | [
"Youze Wang",
"Wenbo Hu",
"Qin Li",
"Richang Hong"
] | Vision-language pre-training (VLP) models, known for their generalization across multimodal tasks, are increasingly deployed in perturbation-sensitive environments, highlighting the need for improved adversarial robustness. Recent studies have revealed VLP models' vulnerability to multimodal adversarial attacks, which exploit interactions across multiple modalities to uncover deeper weaknesses than single-modal attacks. Methods like Co-attack, SGA, and VLP-attack leverage cross-modal interactions to more effectively challenge models' robustness. To counter these threats, adversarial fine-tuning has emerged as a key strategy. Our approach refines vision encoders using Multi-granularity Aligned Visual Adversarial Fine-tuning, which enhances robustness by expanding the vision semantic space and aligning features across perturbed and clean models. Extensive experiments demonstrate that our method offers superior robustness to multimodal adversarial attacks while preserving clean performance on downstream V+L tasks. | [
"vision-language pretraining models",
"adversarial fine-tuning"
] | Accept | https://openreview.net/pdf?id=9obhyu9csa | https://openreview.net/forum?id=9obhyu9csa | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"ndVLwdSrls"
],
"note_type": [
"decision"
],
"note_created": [
1741058076010
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
9367w3BSHC | AdvBDGen: A Robust Framework for Generating Adaptive and Stealthy Backdoors in LLM Alignment Attacks | [
"Pankayaraj Pathmanathan",
"Udari Madhushani Sehwag",
"Michael-Andrei Panaitescu-Liess",
"Furong Huang"
] | With the increasing adoption of reinforcement learning with human feedback (RLHF) to align large language models (LLMs), the risk of backdoor installation during the alignment process has grown, potentially leading to unintended and harmful behaviors. Existing backdoor attacks mostly focus on simpler tasks, such as sequence classification, making them either difficult to install in LLM alignment or installable but easily detectable and removable. In this work, we introduce AdvBDGen, a generative fine-tuning framework that automatically creates prompt-specific paraphrases as triggers, enabling stealthier and more resilient backdoor attacks in LLM alignment. AdvBDGen is designed to exploit the disparities in learning speeds between strong and weak discriminators to craft backdoors that are both installable and stealthy. Using as little as 3% of the fine-tuning data, AdvBDGen can install highly effective backdoor triggers that, once installed, not only jailbreak LLMs during inference but also exhibit greater stability against input perturbations and improved robustness to trigger removal methods. Our findings highlight the growing vulnerability of LLM alignment pipelines to advanced backdoor attacks, underscoring the pressing need for more robust defense mechanisms. | [
"LLM Alignment",
"Backdoor"
] | Accept | https://openreview.net/pdf?id=9367w3BSHC | https://openreview.net/forum?id=9367w3BSHC | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"uM83VOeW85",
"tFsrrPd7p6",
"afzgGPhvGD",
"OdtQWgvIkC"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740310475044,
1740812811906,
1740478019020,
1741078308255
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission9/Reviewer_uEPu"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission9/Reviewer_8bya"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission9/Reviewer_N3wC"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Useful study on LLM alignment backdoor triggers\", \"review\": \"The authors present AdvBDGen, a framework that produces backdoor triggers in LLM alignment. The paper is well written, the authors show AdvBDGen generates effective triggers that are hard to detect via PPL analysis and prompt inspection. They also show the attack is transferrable across different models. I would like to flag that it is extremely hard to read the images and plots in the paper as the text is unnecessarily tiny.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"Comments\", \"review\": \"Strengths:\\n1. Classic security problems in LLMs.\\n2. Introduces a novel method for adaptively and flexibly generating backdoor triggers, resulting in a more stealthy and robust backdoor attack.\\n3. Detailed experiment results that highlight the robustness, stealthiness and stability of this proposed trigger generation method.\", \"weaknesses\": \"1. Lacks comprehensive experiment of the utility performance of the backdoored model, leaving it unclear whether the proposed approach impacts the model\\u2019s responses when no trigger is in the prompt.\\n2. Lacks of the comparison result of other trigger generation methods. A lot of existing trigger generation methods are not compared with in this paper.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Promising direction, but presentation needs to be improved\", \"review\": \"The paper proposes a generative model (i.e., GAN) for crafting backdoor attacks on the fine-tuning dataset, targeting the RLHF step of LLMs (i.e., DPO). Unlike existing literature, which mostly focuses on \\\"constant\\\" triggers, the proposed method has the potential to adapt the trigger pattern to different input prompts, thus leading to higher attack stealthiness. In general, the paper provides a comprehensive study with detailed discussions and a set of arguments to advocate the proposed generative backdoor attack framework. In particular, I think that considering trigger variability and making the outcome of backdoor attacks \\\"probabilistic\\\" is a promising direction to study.\\n\\nHowever, the written quality and presentation of the paper can be significantly improved. I found a couple of vague terms (lack of clear definitions) and intuitive arguments (lack of concrete evidence to support the claim). For example, the terms \\\"Fortified\\\" and \\\"Fuzzy\\\" used in the title are difficult to parse and are not explained anywhere in the main paper. How do you define and control the strength of a discriminator? Regarding the threat model, the attacker's goal is described as inducing \\\"misaligned behavior,\\\" which is very generic. Can you give more specific examples of what you mean by misaligned behaviors and also motivate why this is a meaningful goal for the attacker? Note that this also affects the adopted evaluation protocols on attack success - Without a clear definition/description of \\\"misaligned behavior\\\", it is difficult to see whether the adopted evaluation metrics are reasonable.\\n\\nA major argument/motivation of the paper is that constant-trigger backdoor attacks can be easily detected or defended. While intuitively making sense, this argument should be supported with more concrete evidence to be convincing. To the best of my knowledge about the literature on backdoor defenses, this is a strong claim. What is the scope of the defenders/detectors that are applicable to your claim? You also consider the unlearning schemes as a backdoor defensive method, assuming that a set of triggered samples is known to the defender. Why is this assumption realistic? This seems to assume the involvement of human inspection as well, so how can your backdoor attack remain stealthy if a model trainer investigates the fine-tuning data (either using some automated tools like a clean reward model or checking the data manually)? Given that your attack flips the preference label, I don't see why it can be stealthier against human inspectors.\\n\\nThe existing literature has introduced the notion of dynamic backdoor attacks [1] and multi-target backdoor attacks [2], though they focus on other tasks. I suggest the authors include a discussion of how their approach differs from these works. In addition, several typos need to be fixed: Figure ?? in Line 366; test set., in Line 275; Appendix A and Appendix B are unconventional and difficult to read (also there are many typos in these sections). \\n\\n[1] Tuan Anh Nguyen and Tuan Anh Tran, Input-Aware Dynamic Backdoor Attack, NeurIPS 2020.\\n[2] Li et al., Multi-target Backdoor Attacks for Code Pre-trained Models, ACL 2023\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"comment\": \"This paper introduces AdvBDGen, a generative backdoor attack framework that adaptively crafts stealthy triggers for LLM alignment, improving robustness and transferability. While the approach is novel and well-executed, it lacks comparisons with existing trigger generation methods and does not fully evaluate the utility impact on non-triggered responses.\", \"title\": \"Paper Decision\"}"
]
} |
8iWFAzpNlx | Towards Unifying Interpretability and Control: Evaluation via Intervention | [] | With the growing complexity and capability of large language models, a need to understand model reasoning has emerged, often motivated by an underlying goal of controlling and aligning models.
While numerous interpretability and steering methods have been proposed as solutions, they are typically designed either for understanding or for control, seldom addressing both. Additionally, the lack of standardized applications, motivations, and evaluation metrics makes it difficult to assess methods' practical utility and efficacy.
To address the aforementioned issues, we argue that intervention is a fundamental goal of interpretability and introduce success criteria to evaluate how well methods can control model behavior through interventions. To evaluate existing methods for this ability, we unify and extend four popular interpretability methods—sparse autoencoders, logit lens, tuned lens, and probing—into an abstract encoder-decoder framework, enabling interventions on interpretable features that can be mapped back to latent representations to control model outputs.
We introduce two new evaluation metrics: intervention success rate and coherence-intervention tradeoff, designed to measure the accuracy of explanations and their utility in controlling model behavior. Our findings reveal that (1) while current methods allow for intervention, their effectiveness is inconsistent across features and models, (2) lens-based methods outperform SAEs and probes in achieving simple, concrete interventions, and (3) mechanistic interventions often compromise model coherence, underperforming simpler alternatives, such as prompting, and highlighting a critical shortcoming of current interpretability approaches in applications requiring control. | [
"mechanistic interpretability",
"evaluation",
"explainability",
"safety"
] | Reject | https://openreview.net/pdf?id=8iWFAzpNlx | https://openreview.net/forum?id=8iWFAzpNlx | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"uYVSBEMyg1",
"qqHBTiJcsN",
"O8fdl3nbVb",
"LBcRdqfJsE"
],
"note_type": [
"official_review",
"decision",
"official_review",
"official_review"
],
"note_created": [
1740878805224,
1741109665673,
1740915405732,
1740169369720
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission65/Reviewer_k4h2"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission65/Reviewer_qRdX"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission65/Reviewer_SMZz"
]
],
"structured_content_str": [
"{\"title\": \"Important direction, but the specific causal intervention evaluation does not provide a fair or accurate comparison of interpretability methods\", \"review\": [\"# Summary\", \"This paper compares several interpretability methods (sparse autoencoders, Logit Lens, Tuned Lens, and probing) by studying their effectiveness as causal intervention tools. The authors evaluate interventions by testing how often the model produces certain tokens in the answer to an open-ended question. They study the tradeoff between intervention effectiveness and output coherence. Across several model sizes, they find that logit-based interventions are most effective in producing specified tokens, though all methods generally underperform simple prompting baselines.\", \"# Strengths\", \"The paper is clearly written with a well-structured approach to comparing interpretability methods.\", \"The problem is well-motivated, addressing an important gap in how to evaluate and compare interpretability techniques.\", \"The unified encoder-decoder framework provides a useful abstraction for comparing methods with different underlying mechanisms.\", \"The coherence-intervention trade-off analysis provides valuable insights about the practical utility of these methods.\", \"# Weaknesses\", \"The definition of feature explanation \\u201ccorrectness\\u201d lacks nuance. The authors define correctness in terms of effect on the output. While some interpretability methods are designed to produce an effect on the output, others can be useful for providing insights about internal representations.\", \"While it is valuable to evaluate different interpretability methods based on their downstream effects, the evaluation should be put in the context of a downstream task. In this project, the downstream task is to output a certain token. This task naturally favors Logit Lens, which directly manipulates token probabilities. Other intervention types like SAEs or probes might represent high-level concepts or be less specific to a certain token, so their effectiveness should be measured on different types of downstream tasks. Although there is also some analysis of high-level concepts, their analysis is not as thorough.\", \"The paper is missing some important implementation details, like which SAE width they are using for the features listed. There is also no mention of how model generations are sampled (temperature, top-k, etc.), which significantly impacts both the success rate of interventions and output coherence measurements.\", \"The SAE features used are not very specific to the concepts they are evaluating them for (if I inferred correctly that they were using the 16k SAEs for Gemma 2 2b, which is not specifed). For example, the San Francisco feature for Gemma 2 2b activates on locations in the Bay Area in general and not only San Francisco, and similarly for the New York feature. This makes their evaluation metric hard to interpret when comparing different intervention methods.\", \"The SAE error should probably be added back to the decoded x\\u2019, although this doesn\\u2019t seem to have a huge effect on the output and this effect is analyzed in the appendix.\", \"The paper would benefit from more discussion of why prompting outperforms the mechanistic methods tested. While not surprising, it calls into question the effectiveness of these techniques at steering model outputs.\"], \"rating\": \"5\", \"confidence\": \"4\"}",
"{\"decision\": \"Reject\", \"comment\": \"The reviewers generally find the paper well-motivated, clearly structured, and a valuable contribution to evaluating interpretability methods through interventions. However, concerns remain regarding the fairness of comparisons, the premise that interpretability should always enable control, and the evaluation setup favoring certain methods (e.g., Logit Lens). Additionally, more discussion on why prompting outperforms mechanistic methods and greater focus on complex feature interventions would strengthen the impact.\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review for \\\"Towards Unifying Interpretability and Control: Evaluation via Intervention\\\"\", \"review\": [\"**Summary**\", \"This paper investigates the relationship between interpretability and control in large language models (LLMs), proposing that effective interpretability should enable precise interventions. The authors adapt four interpretability methods\\u2014Logit Lens, Tuned Lens, Sparse Autoencoders (SAEs), and Probing\\u2014for use in intervention scenarios. They introduce a unified encoder-decoder framework to integrate these methods and assess their effectiveness in enabling controlled modifications. Two new metrics, Intervention Success Rate (ISR) and Coherence-Intervention Tradeoff (CIT), are proposed to evaluate whether interpretability methods allow for controlled changes without compromising coherence. Experiments on open-weight models reveal that lens-based methods outperform SAEs and probing for simple interventions but often degrade coherence.\", \"**Strengths**\", \"**Adaptation of interpretability methods for intervention**: The authors put significant effort into adapting lens-based methods for intervention, such as finding pseudo-inverses, which is a notable contribution.\", \"**Fair comparison of methods**: The introduction of metrics like \\\"normalized edit distance\\\" ensures a fair comparison across interpretability methods. Such benchmarks could greatly benefit the interpretability research community.\", \"**Weaknesses**\", \"**Unfair comparison for SAEs**: The comparison may be biased against SAEs. The paper identifies SAE features for a concept by searching in Neuronpedia. Neuronpedia explains SAE features by analyzing the text that activates them using LLMs but does not consider their causal effects. Recent work, such as Chalnev et al. (2024), has improved steering with SAEs, and Paulo et al. (2024) have proposed a method to explain SAE features while accounting for their causal effects. These approaches should theoretically enhance the performance of SAEs for steering but are not considered in the paper, potentially skewing the results.\", \"**Questionable premise**: The claim that \\\"effective interpretability should enable precise interventions\\\" is debatable. Methods like SAEs were originally designed for understanding model internals, not for intervention. This premise may not align with the goals of all interpretability methods.\", \"**References**\", \"Chalnev, S., Siu, M., & Conmy, A. (2024). Improving Steering Vectors by Targeting Sparse Autoencoder Features. *arXiv preprint arXiv:2411.02193*.\", \"Paulo, G., Mallen, A., Juang, C., & Belrose, N. (2024). Automatically Interpreting Millions of Features in Large Language Models. *arXiv preprint arXiv:2410.13928*.\"], \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Ambitious goal for benchmarking interpretability methods through interventions, makes a good stride towards this goal. Experiments are compelling, but could target more abstract interventions.\", \"review\": [\"### Strengths\", \"This paper sets an ambitious and essential goal for benchmarking and evaluating interpretability research through interventions. A central difficulty in interpretability research is quantifying the effectiveness of interpretability technique, and this work makes significant strides in this direction.\", \"Experiments are extensive and well-explained. The use of human evaluators to validate the large-scale use of LLM checkers was a strong inclusion. I also particularly enjoyed the inclusion of *Intervention Similarity Between Methods*, to better understand the commonalities between intervention methods.\", \"### Weaknesses\", \"Instead of exploring the listed 10 intervention topics, I found the most compelling evaluation to be in Section 4.4 Complex features. These evaluations are more in line with the stated goals of improving safety and debiasing. In my opinion, interventions on complex features of these sorts should have been the primary focus of the experiments/benchmarks.\", \"I am not certain what to take away from this paper in terms of comparison of intervention methods. I think the paper's conclusions could have been expanded on.\", \"(Minor) There are a few formatting choices in this paper that I found slightly distracting. I've included some suggested changes below to improve clarity.\", \"### Suggestions\", \"I had difficulties understanding Figure 1 when I first encountered it, as I found it a bit overwhelming. I suggest that the table from Figure 1 be moved to Section 3.\", \"I found Figures 2 and 3 to be a bit too small. If kept at their current dimensions, I would suggest increasing the thickness of the lines.\", \"The **Interventions Topics** paragraph (lines 290-300) could be rewritten for better clarity.\", \"Llama3.1-8b is listed as the evaluator on lines 256 and 258, but the caption for Table 1 lists Llama3-8b\", \"Appendix header is listed twice.\", \"Appendix figures should be reformatted to better fill the pages.\"], \"rating\": \"6\", \"confidence\": \"3\"}"
]
} |
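The encoder-decoder framing described in the abstract of "Towards Unifying Interpretability and Control" above — encode a hidden state into interpretable features, edit a feature, and decode the edit back into the latent space — can be illustrated with a minimal linear sketch. Everything below (the dimensions, the random encoder matrix, the pseudo-inverse decoder) is a hypothetical stand-in for whichever component plays the encoder role (SAE, logit lens, tuned lens, or probe); it is not the paper's implementation.

```python
import numpy as np

# Toy encoder/decoder intervention: encode a hidden state into an interpretable
# feature space, overwrite one feature, and decode the edit back into the latent.
# The encoder here is a random linear map standing in for an SAE / lens / probe.

rng = np.random.default_rng(0)
d_model, d_feat = 64, 16                      # hypothetical hidden and feature sizes

W_enc = rng.normal(size=(d_feat, d_model)) / np.sqrt(d_model)  # "interpretability" map
W_dec = np.linalg.pinv(W_enc)                                  # decoder = pseudo-inverse

def intervene(h, feature_idx, target_value):
    """Set one interpretable feature of h to target_value, minimally editing h."""
    f = W_enc @ h                              # features read off the hidden state
    delta = np.zeros_like(f)
    delta[feature_idx] = target_value - f[feature_idx]
    return h + W_dec @ delta                   # push only that change back to the latent

h = rng.normal(size=d_model)                   # stand-in hidden state
h_edited = intervene(h, feature_idx=3, target_value=5.0)
print((W_enc @ h)[3], (W_enc @ h_edited)[3])   # feature 3 moves to ~5.0
```

Because the decoder is the pseudo-inverse of a full-row-rank encoder, the edit moves exactly the targeted feature in this feature basis while leaving the others unchanged; real methods differ in how faithfully their encoder/decoder pairs satisfy this property.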
7z49r7RhP2 | MKA: Leveraging Cross-Lingual Consensus for Model Abstention | [
"Sharad Duwal"
] | The reliability of LLMs is questionable even as they get better at more tasks. Wider adoption of LLMs is contingent on whether they are usably factual and, when they are not, on whether they can properly calibrate their confidence in their responses. This work focuses on utilizing the multilingual knowledge of an LLM to inform its decision to abstain or answer when prompted. We develop a multilingual pipeline to calibrate the model's confidence and let it abstain when uncertain. We run several multilingual models through the pipeline to profile them based on various metrics across different languages. We find that the performance of the pipeline varies by model and language, but that models generally benefit from it. This is evidenced by an accuracy improvement of $71.2$% for Bengali over the baseline performance without the pipeline. Even a high-resource language like English sees a $15.5$% improvement. | [
"model abstention",
"factuality",
"multilingual models",
"cross-lingual consensus",
"reliability",
"hallucination"
] | Accept | https://openreview.net/pdf?id=7z49r7RhP2 | https://openreview.net/forum?id=7z49r7RhP2 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"vaXOSE0ihy",
"sxwlUD7QGs",
"qvJHlcSpzG",
"DF8ZLUsgyL"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740870348642,
1740957978574,
1739740318213,
1741156098491
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission146/Reviewer_45Ps"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission146/Reviewer_Mu4v"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission146/Reviewer_yfAc"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"The paper improves LLM confidence calibration but needs clearer novelty and evaluation justification.\", \"review\": [\"## Summary\", \"The paper presents a novel approach to model confidence estimation by leveraging the multilingual capabilities of LLMs. The proposed Multilingual Knowledge Abstention (MKA) pipeline translates questions into a group of auxiliary languages, generates responses in these languages, and then uses a centroid-based cosine similarity method to assess confidence. The approach is evaluated on multiple multilingual models using multiple-choice question answering (MCQA) tasks. Results demonstrate notable accuracy improvements, particularly for low-resource languages, supporting the claim that cross-lingual knowledge enhances abstention-based reliability.\", \"## Strengths\", \"Relevance: The paper tackles an important issue in LLM trustworthiness\\u2014calibrating confidence to enable abstention when uncertain. This aligns well with the objectives of the workshop.\", \"Approach: The idea of explicitly prompting LLMs in auxiliary languages to better assess confidence is interesting, particularly given prior findings that LLMs do not implicitly leverage cross-lingual knowledge.\", \"Strong Empirical Results: The evaluation shows substantial improvements over baselines in multiple languages, with particularly high gains for low-resource languages.\", \"Clarity & Structure: The methodology is clearly structured, making it easy to follow the pipeline\\u2019s step-by-step execution. The explanation of confidence cutoffs and their impact is especially useful.\", \"## Weaknesses\", \"Comparison with Prior Work: The paper mentions Feng et al., 2024a, which also explores multilingual feedback for abstention, but does not clearly differentiate how the proposed method advances beyond it. A direct comparison or discussion clarifying the novelty in methodology or performance would strengthen the paper.\", \"Evaluation Metric Choice: The use of cosine similarity between sentence embeddings for evaluating correctness is somewhat questionable, given that the dataset consists of multiple-choice questions where correctness could be directly assessed. A justification for this choice would be helpful, especially since sentence similarity is not always reliable for evaluating factual correctness.\", \"Minor Issues:\", \"The acronym MCQA is used without definition in the text.\", \"Line 324 should read \\\"We used *a* baseline\\\" instead of \\\"We used *an* baseline\\\".\", \"## Overall Evaluation\", \"This paper presents a compelling approach to improving LLM confidence calibration via multilingual prompting. The results show benefits for low-resource languages, making this an impactful contribution to improving model reliability. However, a clearer differentiation from prior work (particularly Feng et al., 2024a) and a stronger rationale for the evaluation methodology would improve the paper.\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"MKA: Leveraging Cross-Lingual Consensus for Model Abstention\", \"review\": \"This paper investigates methods for improving machine translation in low-resource languages without direct parallel corpora. The authors propose leveraging multilingual LLMs to bridge the gap between high-resource and no-resource language pairs through intermediary languages. The study evaluates various translation strategies and benchmarks them against existing methods.\\n\\nStrengths\", \"important_problem\": \"The paper tackles the critical challenge of improving machine translation for no-resource languages, which is a significant step toward linguistic inclusivity.\", \"novel_approach\": \"The authors explore creative ways to use multilingual LLMs for indirect translation, potentially opening new avenues in translation research.\", \"strong_experimental_setup\": \"The evaluation includes multiple language pairs and a range of benchmarks, making the findings more generalizable.\", \"quantitative_and_qualitative_insights\": \"The paper provides both statistical evaluation and qualitative error analysis, strengthening its conclusions.\\n\\nWeaknesses\", \"dependence_on_intermediate_languages\": \"The approach heavily relies on intermediate languages, which may introduce compounding errors and degrade translation quality.\", \"lack_of_baseline_comparisons\": \"While the study presents novel methods, a clearer comparison with state-of-the-art no-resource translation systems would help contextualize improvements.\", \"computational_costs\": \"The paper does not address the efficiency or scalability of using multilingual LLMs for indirect translation, which may be a concern for real-world applications.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"Review\", \"review\": \"The paper focuses on the problem of getting LLMs to properly qualify their responses based on their confidences, and abstaining entirely when they are not confident of their response. This is a valuable area of research, and very relevant to the topic of the workshop. The paper builds on the existing work of Feng et al (2024a) and Feng et al (2024b), which suggest estimating confidence via \\u201ccross-lingual consensus\\u201d, i.e. prompting the LLM in multiple languages and seeing if they agree.\\n\\nHowever, the contribution of this paper over these previous works does not seem significant to me. The main contribution claimed by the paper is the \\u201cMKA pipeline\\u201d, a systematic implementation of the cross-lingual consensus proposal, but the implementation of this pipeline is quite basic and it is not clear to me what the new non-trivial insight is.\\n\\nIn order to empirically justify such a pipeline, one would have to demonstrate that (e.g.) it abstains more when it is incorrect, etc. While they try to show this, their evaluation metric to assess if the model\\u2019s answer is correct or incorrect is to simply calculate its cosine-similarity with the \\u201cmodel answer\\u201c, which seems weak.\\n\\nTo make this a better paper, the authors should:\\n\\n1) Evaluate on a task where ground truth is more easily available, e.g. question-answering or coding, or use a more robust and testable method of checking if a model\\u2019s answer is correct, or consider a setting like forecasting and measure how much money the model could lose to arbitrage due to making uninformed overconfident bets. \\n\\n2) Benchmark their pipeline against alternative methods of confidence qualification. There is a vast literature on this that is ignored in this paper, some links below, but I\\u2019m sure there are more.\", \"https\": \"//github.com/xjdr-alt/entropix\\n\\nTL;DR Getting LLMs to qualify their responses with confidence levels is an important problem, but I\\u2019m not really sold on the general direction of using cross-lingual consensus for this, and especially not on the value and contribution size of this particular paper. To defend this better, the authors should systematically evaluate their pipeline against alternative proposals, and also use a more sensible evaluation metric than \\u201ccosine similarity with model answer\\u201d.\", \"rating\": \"4\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"comment\": \"This paper explores confidence calibration in multilingual LLMs, proposing a Multilingual Knowledge Abstention (MKA) pipeline that uses cross-lingual consensus to determine when a model should abstain from answering. The problem is highly relevant to LLM trustworthiness, and the results show notable improvements in accuracy, particularly for low-resource languages. Reviewer 1 (R1) finds the problem important and the experimental setup strong, but notes concerns about intermediate language reliance and missing baseline comparisons. Reviewer 2 (R2) appreciates the novel approach but questions its distinction from prior work (Feng et al., 2024a) and the choice of cosine similarity for correctness evaluation. Reviewer 3 (R3) raises stronger concerns about novelty, arguing that the contribution does not go beyond prior cross-lingual confidence estimation work, and suggests evaluating against alternative confidence metrics. While the novelty concerns are valid, the paper presents important results and a relevant direction for improving LLM trustworthiness. Given its relevance and empirical impact, I recommend acceptance as a workshop paper to encourage further discussion and refinement.\", \"title\": \"Paper Decision\"}"
]
} |
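As a rough illustration of the cross-lingual consensus idea in the MKA abstract above — answer the same question in several auxiliary languages, embed the answers, and abstain when they disagree — here is a minimal sketch. The embedding dimension, the random placeholder embeddings, and the cutoff value are all assumptions for illustration; the paper's actual pipeline, languages, and thresholds are not reproduced here.

```python
import numpy as np

# Toy centroid-based consensus: embed answers produced in several auxiliary
# languages, measure how tightly they cluster, and abstain below a cutoff.
# Random vectors stand in for real sentence embeddings of translated answers.

def consensus_confidence(answer_embeddings):
    """Mean cosine similarity of each answer embedding to their common centroid."""
    X = np.asarray(answer_embeddings, dtype=float)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    centroid = X.mean(axis=0)
    centroid = centroid / np.linalg.norm(centroid)
    return float((X @ centroid).mean())

def answer_or_abstain(answer_embeddings, candidate_answer, cutoff=0.8):
    """Return the candidate answer only when the cross-lingual answers agree enough."""
    conf = consensus_confidence(answer_embeddings)
    decision = candidate_answer if conf >= cutoff else "[abstain]"
    return decision, conf

rng = np.random.default_rng(1)
agreeing = rng.normal(size=(1, 384)) + 0.05 * rng.normal(size=(5, 384))   # near-duplicates
disagreeing = rng.normal(size=(5, 384))                                   # unrelated answers
print(answer_or_abstain(agreeing, "Kathmandu"))      # high consensus -> answer
print(answer_or_abstain(disagreeing, "Kathmandu"))   # low consensus  -> abstain
```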
6uyczU6S2M | Automated Red Teaming with GOAT: the Generative Offensive Agent Tester | [
"Maya Pavlova",
"Erik Brinkman",
"Krithika Iyer",
"Vítor Albiero",
"Joanna Bitton",
"Hailey Nguyen",
"Cristian Canton Ferrer",
"Ivan Evtimov",
"Aaron Grattafiori"
] | Red teaming assesses how large language models (LLMs) can produce content that violates norms, policies, and rules set forth during their safety training. However, most existing automated methods in the literature are not representative of the way common users exploit the multi-turn conversational nature of AI models. While manual testing addresses this gap, it is an inefficient and often expensive process. To address these limitations, we introduce the Generative Offensive Agent Tester (GOAT), an automated agentic red teaming system that simulates plain-language adversarial conversations while leveraging multiple adversarial prompting techniques to identify vulnerabilities in LLMs. We instantiate GOAT with 7 red teaming attacks by prompting a general purpose model in a way that encourages reasoning through the choices of methods available, the current target model’s response, and the next steps. Our approach is designed to be extensible and efficient, allowing human testers to focus on exploring new areas of risk while automation covers the scaled adversarial stress-testing of known risk territory. We present the design and evaluation of GOAT, demonstrating its effectiveness in identifying vulnerabilities in state-of-the-art LLMs, with an ASR@10 of 96% against smaller models such as Llama 3.1 8B, and 91% against Llama 3.1 70B and 94% for GPT-4o when evaluated against larger models on the JailbreakBench dataset. | [
"red teaming",
"adversarial machine learning",
"adversarial examples",
"attacks on language models"
] | Accept | https://openreview.net/pdf?id=6uyczU6S2M | https://openreview.net/forum?id=6uyczU6S2M | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"wKSFDfq2lE",
"NXzl27wfVf",
"DtnKEL8AWr",
"AyNNEL8io3"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740289285032,
1740826405449,
1740826223242,
1741083202918
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission32/Reviewer_C6tr"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission32/Reviewer_oHeE"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission32/Reviewer_nMUL"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"review of submission #32\", \"review\": \"This paper proposes GOAT, an automated attack for multi turn jailbreaking that outperforms baselines. the method demonstrates strong results on a variety of LLMs. The method is intuitive and seems similar to PAIR, a baseline for single turn jailbreaks, so GOAT could be a similar baseline.\\n\\nStrengths\\nStrong results\\nIntuitive, simple method\\n\\nWeaknesses\\nEvaluation seems somewhat limited, comparing to only one baseline (Crescendo). I wonder if there could be a comparison to single turn attacks and better analysis of why multi turn is stronger, or human multi turn jailbreaking.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"Good paper\", \"review\": [\"strengths:\", \"interesting method for multiturn red teaming, resulting in high ASR for SOTA models\", \"Relevant findings show that over time (more turns), ASR increases. This makes sense given that the previous context pushes the probability for the next tokens to be in a very narrow space.\"], \"weaknesses\": [\"It seems unclear how long-lasting this result it, it seems like this type of attack can be pretty easily mitigated\", \"more attack scenarios beyond refusal suppression should be investigated\"], \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"GOAT is a strong multi-turn red teaming framework that outperforms Crescendo in attack success rate and efficiency, but should be benchmarked against CFA for a more robust comparison. Clarifications on attacker LLM, strategy selection, and Judge LLM role would strengthen the paper. Rating: 7 (Good paper, accept)\", \"review\": \"### Summary\\n\\nGOAT (Generative Offensive Agent Tester) is an automated multi-turn red teaming framework designed to test the security of LLM chatbots by dynamically selecting from seven adversarial prompting strategies. It demonstrates a high attack success rate against LLaMA models and achieves its objective within an average of 5 conversation turns\\u2014significantly faster than similar frameworks like Crescendo, which can take up to 10 turns. GOAT employs an attacker LLM to generate adversarial prompts, drawing attack objectives from JailbreakBench and leveraging Chain of Thought reasoning. The paper reports superior Attack Success Ratios (ASR) compared to Crescendo across multiple target models, including LLaMA 2 7B Chat, LLaMA 3.0 8B Instruct, LLaMA 3.1 8B Instruct, LLaMA 3.1 70B Instruct, as well as three widely used instruction-tuned GPT models\\u2014GPT-4o, GPT-4-Turbo, and GPT-3.5-Turbo. The method requires fewer total LLM queries per attack compared to Crescendo, reducing computational costs.\\n\\n### Strengths\\n\\nImpressive ASR for models in least no of turns compared to other methods\\nExplore various prompting strategies dynamically\\nThrough benchmarking against CRESCENDO\\nGOAT outperforms crescendo\\n\\n### Weaknesses \\n\\nShould be benchmarked against other multi-turn LLM-based frameworks like CFA(Multi-Turn Context Jailbreak Attack on Large Language Models From First Principles), which is more stealthy and needs less no of turns than CRESCENDO (5-12 turns) to strengthen paper\\u2019s claims\\n\\nIt is not clear what attacker LLM is used for Figure 1. As B.1 mentions that off-the-shelf LLM GPT-4o is used, is the attacker LLM fine-tuned? More details on this in the paper will provide clarity to the reader.\\n\\nAs dynamically changing prompting strategies is one of the highlights of this method, more information is needed on how the strategy is selected. \\u201cFor multiple attack insertions, as conducted for these experiments, repeat the attack placeholders for each attack\\u201d looks like different attacks are manually inserted one by one.\\n\\nThe role of Judge LLM in GOAT is unclear. Is it only used to do final ASR evals? Also considering Llama models have strong safety alignment, was any manual quantitative review done for ASR?\\n\\n### Clarifications that did not affected evaluation\\n\\nOther relevant benchmark which is more recent is APRT(Automated Progressive Red Teaming) \\nWill code be released?\\nHow will increasing context length from 4096 affect comparisons with crescendo?\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
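The agentic loop below is only a schematic reading of the GOAT abstract above: an attacker model reasons over a list of adversarial techniques and the conversation so far, the target model replies, and a judge decides whether the goal was reached. The strategy names, the prompt template, and the toy stand-in callables are illustrative assumptions, not GOAT's actual prompts or components.

```python
from dataclasses import dataclass, field

# Schematic multi-turn red-teaming loop: an attacker model picks a technique and
# message each turn, conditioned on the goal and the conversation so far, and a
# judge decides whether the target's reply violates policy.

STRATEGIES = ["persona modification", "hypothetical framing", "refusal suppression"]

@dataclass
class Conversation:
    goal: str
    turns: list = field(default_factory=list)      # (attacker_msg, target_reply) pairs

def run_attack(goal, attacker, target, judge, max_turns=5):
    convo = Conversation(goal=goal)
    for _ in range(max_turns):
        attacker_msg = attacker(
            f"Goal: {goal}\nAvailable techniques: {STRATEGIES}\n"
            f"History so far: {convo.turns}\n"
            "Reason about the target's last reply, pick a technique, write the next message."
        )
        reply = target(attacker_msg)
        convo.turns.append((attacker_msg, reply))
        if judge(goal, reply):                      # judge flags a policy-violating reply
            return convo, True
    return convo, False

# Toy stand-ins so the loop runs end to end without any real models.
def attacker(prompt):
    return f"[next adversarial message, conditioned on {len(prompt)} chars of context]"

def target(message):
    return "I can't help with that."

def judge(goal, reply):
    return "can't help" not in reply

convo, success = run_attack("elicit a disallowed recipe", attacker, target, judge)
print(success, len(convo.turns))                    # False 5 -- the toy target always refuses
```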
5DfhoxRPXh | Private Retrieval Augmented Generation with Random Projection | [
"Dixi Yao",
"Tian Li"
] | Large Language Models (LLMs) have gained widespread interest and driven advancements across various fields. Retrieval-Augmented Generation (RAG) enables LLMs to incorporate domain-specific knowledge without retraining. However, evidence shows that RAG poses significant privacy risks due to leakage of sensitive information stored in the retrieval database. In this work, we propose a private randomized mechanism to project both the queries and the datastore into a lower-dimensional space using Gaussian matrices, while preserving the similarities for effective retrieval. Empirical evaluation on different RAG architectures demonstrates that our solution achieves strong empirical privacy protection with negligible impact on generation performance and latency compared to prior methods. | [
"Differential Privacy; Large Language Model; Retrieval-Augmented Generation"
] | Accept | https://openreview.net/pdf?id=5DfhoxRPXh | https://openreview.net/forum?id=5DfhoxRPXh | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"nxj8W5x7Ro",
"gIuJFqcXlW",
"OWy7pPaHNV",
"1JpV3MR33B"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740899874546,
1740848161503,
1740810731894,
1741103342807
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission33/Reviewer_gLyL"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission33/Reviewer_cbZG"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission33/Reviewer_dGxa"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"review\": [\"### Summary\", \"The paper proposes a differentially private mechanisms for RAG using random projection. The key motivation is the privacy risks associated with RAG systems, where sensitive information in retrieval databases might be leaked. The paper introduces a randomized projection mechanism based on Gaussian matrices to project both queries and the datastore into a lower-dimensional space while preserving retrieval effectiveness. The mechanism is evaluated across KNN-LM and direct prompting architectures.\", \"### Strengths\", \"The paper is well-organized for the most part and investigates an important privacy concern in the context of LLM-based RAG.\", \"The paper is technically sound in its description of problem formulation and has theoretical justified approach to the problem.\", \"### Weaknesses\", \"The paper primarily evaluates the proposed method on the Enron Email dataset, which is a relatively small-scale dataset. However, in real-world applications, RAG systems are often deployed on much larger document collections. How does the random projection method scale when the number of indexed documents increase drastically? How does the runtime for projection and retrieval change with dataset size?\", \"While the paper includes a proof sketch for Theorem 1, some steps in the derivation are a bit difficult to follow. Providing a more detailed breakdown could improve readability.\", \"The writing overall could be improved for clarity and coherence to enhance presentation and readability.\"], \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"This paper proposes a differentially private (DP) mechanism for Retrieval-Augmented Generation (RAG) systems using Gaussian random projection to protect sensitive data in retrieval databases. By projecting queries and datastore embeddings into a lower-dimensional space, the method achieves DP guarantees (\\u03b5 \\u22485) while maintaining retrieval effectiveness and generation quality.\", \"review\": \"## Quality & Clarity:\", \"the_paper_addresses_a_critical_challenge_in_rag_systems\": \"privacy leakage from retrieval databases. The methodology is clearly described, with a theoretical DP guarantee (Theorem 1) and empirical validation on KNN-LM and direct-prompting architectures. The writing is structured, though some sections (e.g., the proof sketch in \\u00a74.2) could benefit from expanded explanations.\\n\\n## Originality:\\nThe use of Johnson-Lindenstrauss transforms for DP in RAG is novel. Prior work applied DP to RAG via token-level noise (Koga et al., 2024) or synthetic data (Zeng et al., 2024a), but this paper\\u2019s random projection approach offers a computationally efficient alternative.\\n\\n## Significance:\\nThe method\\u2019s low overhead (0.811s vs. 0.793s baseline latency) and strong privacy-utility tradeoff make it practical for real-world deployment. Results show 0 leaked emails/phone numbers in KNN-LM and direct-prompting setups (Tables 1\\u20132), outperforming baselines like DP-RP-G1.\\n\\n### Pros:\\n- Novel application of random projection for DP in RAG.\\n- Strong empirical results: eliminates 100% of email/phone leaks while maintaining perplexity (e.g., 2.89 vs. 2.872 baseline in KNN-LM).\\n- Theorems formally link projection parameters (\\u03c3, k) to DP guarantees.\\n- Efficient implementation with minimal latency increase.\\n\\n### Cons:\\n- Limited comparison to other DP mechanisms (e.g., Laplace noise).\\n- Experiments use GPT-2; modern LLMs (GPT-4, Gemini) may behave differently.\\n- Theoretical analysis assumes normalized embeddings; real-world unnormalized data may affect results.\", \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"title\": \"The paper introduces a privacy-preserving retrieval-augmented generation method using random projection and differential privacy, demonstrating strong theoretical backing and practical efficiency while balancing privacy-utility trade-offs, but facing challenges in tuning complexity, dataset scope, and experimental diversity.\", \"review\": \"Summary [This paper presents a straightforward but effective solution for privacy-preserving retrieval-augmented generation by combining random projection (inspired by the Johnson\\u2013Lindenstrauss lemma) with differential privacy. The authors show that one can greatly reduce the leakage of sensitive information in RAG systems without incurring a large drop in model quality or speed. The method is especially relevant for enterprise or medical settings, where ensuring confidentiality of retrieved documents is crucial. Overall, the main contribution is a new mechanism for embedding-level differential privacy in RAG. It has strong theoretical backing, demonstrates promising results on a real-world sensitive dataset, and remains computationally efficient for practical deployment. However, as with most privacy techniques, there is a trade-off between utility and privacy, and future work could explore more adaptive approaches to mitigate performance trade-offs across various domains.]\\n\\nStrengths [-Clear Theoretical Guarantees: The paper gives a rigorous proof that random projection with a properly chosen variance can satisfy (\\u03b1, \\u03b5)-RDP and thereby (\\u03b5, \\u03b4)-DP. This is a solid theoretical grounding. -Preservation of Retrieval Quality: By using Johnson\\u2013Lindenstrauss\\u2013based random projection, the method largely preserves the distances between embeddings, thus minimizing the performance drop in retrieval tasks. -Minimal Overhead: Empirically, the added computational overhead is small (only slight increases in per-query latency). The paper's results show that perplexity remains close to the baseline, demonstrating the practicality of the method. -Broad Applicability: Demonstrated on two different RAG architectures (kNN-LM and direct-prompting). The approach is generic enough to be applied in many retrieval-based scenarios.]\\n\\nWeaknesses\\n[-Dependency on Dimensionality and Noise: The success of the random projection depends on choosing an appropriate projection dimension k and noise variance \\\\sigma. If k is too small or noise is too high, the retrieval quality and overall model performance can degrade more severely. -Limited Range of Experiments: The paper focuses on GPT-2 and the Enron Email dataset. Results may vary for larger LLMs (e.g., GPT-3.5/4) or other private corpora with different distributional characteristics. -Scope of Protection: The work secures data retrieved from the datastore via embeddings. However, it does not address memorized data already within an LLM's parameters, which can be a separate vector of attack. -Potential Tuning Complexity: In practice, setting the privacy budget (\\\\epsilon, \\\\delta) and the associated noise scale \\\\sigma may require domain expertise and repeated experimentation to strike a balance between utility and privacy.]\", \"rating\": \"9\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
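A minimal sketch of the mechanism described in the Private RAG abstract above — projecting queries and datastore embeddings with a shared Gaussian matrix and adding Gaussian noise — is given below. The dimensions and the noise scale sigma are arbitrary placeholders; calibrating sigma and the projection dimension k to an (epsilon, delta) differential-privacy budget is the paper's contribution and is not reproduced here.

```python
import numpy as np

# Toy Gaussian random projection for a RAG datastore: queries and documents are
# mapped to k dimensions with one shared Gaussian matrix (a Johnson-Lindenstrauss
# transform) and perturbed with Gaussian noise. sigma is an arbitrary placeholder,
# not a value calibrated to a differential-privacy budget.

rng = np.random.default_rng(0)
d, k, n = 768, 128, 5                       # original dim, projected dim, #documents
sigma = 0.05                                # placeholder noise scale

P = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))     # shared Gaussian projection

def project(X):
    """Project rows of X down to k dims and add Gaussian noise."""
    return X @ P + rng.normal(scale=sigma, size=(X.shape[0], k))

docs = rng.normal(size=(n, d))
query = rng.normal(size=(1, d))
docs_p, query_p = project(docs), project(query)

for i in range(n):
    true_dist = np.linalg.norm(query[0] - docs[i])
    proj_dist = np.linalg.norm(query_p[0] - docs_p[i])
    print(f"doc {i}: distance {true_dist:.1f} in {d} dims vs {proj_dist:.1f} after projection")
# Pairwise distances are approximately preserved (up to O(1/sqrt(k)) JL distortion),
# which is what keeps retrieval usable after the private projection.
```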
4ZrDvmPnKo | PRUNING AS A DEFENSE: REDUCING MEMORIZATION IN LARGE LANGUAGE MODELS | [
"Mansi Gupta",
"Nikhar Waghela",
"Sarthak Gupta",
"Shourya Goel",
"Sanjif Shanmugavelu"
] | Large language models have been shown to memorize significant portions of their training data, which they can reproduce when appropriately prompted. This work investigates the impact of simple pruning techniques on this behavior. Our findings reveal that pruning effectively reduces the extent of memorization in LLMs, demonstrating its potential as a foundational approach for mitigating membership inference attacks. | [
"pruning",
"memorization",
"LLMs"
] | Accept | https://openreview.net/pdf?id=4ZrDvmPnKo | https://openreview.net/forum?id=4ZrDvmPnKo | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"tMouCTiKtC",
"bjXGqZkBw8",
"WqgMqlYJFk",
"TvmB7mmBHj"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740094461434,
1740844940506,
1740856505278,
1740837447125
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission58/Reviewer_Fdn8"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission58/Reviewer_JPoU"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission58/Reviewer_HSFE"
]
],
"structured_content_str": [
"{\"title\": \"This paper investigates the impact of pruning techniques on reducing memorization in large language models (LLMs), addressing a critical issue in privacy and security. The study is well-structured, with clear methodology and comprehensive experiments using the Pythia family of models. The results demonstrate that pruning effectively reduces memorization while maintaining model performance, with global pruning and attention-layer pruning showing the most significant effects. The paper is original, relevant, and contributes valuable insights to the field. However, it is limited by a small dataset and reliance on perplexity as the primary performance metric. Overall, the paper is a strong candidate for acceptance, with potential for further exploration in future work.\", \"review\": \"Quality\\nThe paper is well-structured and presents a thorough investigation into the impact of pruning on memorization in large language models (LLMs). The methodology is sound, and the experiments are well-designed, leveraging the Pythia family of models and the Pile dataset. The results are clearly presented, with tables summarizing the reduction in memorization and the impact on perplexity. The paper also acknowledges its limitations and suggests directions for future work, which adds to its credibility.\\n\\nClarity\\nThe paper is generally clear and well-written. The abstract provides a concise overview of the study, and the introduction effectively sets the stage for the research. The methodology section is detailed and explains the pruning strategies and evaluation metrics clearly. However, some parts of the paper could benefit from more explicit explanations, particularly in the results section, where the implications of the findings could be discussed in greater depth.\\n\\nOriginality\", \"the_paper_addresses_a_significant_and_timely_issue_in_the_field_of_llms\": \"the memorization of training data and its implications for privacy and security. While pruning is not a new technique, its application to mitigate memorization in LLMs is novel and contributes to the ongoing discourse on model efficiency and privacy. The paper builds on existing work but offers new insights into how pruning can be used to reduce memorization.\\n\\nSignificance\\nThe findings of this paper are highly relevant to the field of machine learning, particularly in the context of privacy-preserving AI. The demonstration that pruning can effectively reduce memorization without significantly degrading model performance is a valuable contribution. This work has the potential to influence future research and practical applications in the development of more secure and efficient LLMs.\\n\\nPros\", \"novel_application\": \"The use of pruning to mitigate memorization in LLMs is innovative and addresses a critical issue in the field.\", \"comprehensive_experiments\": \"The paper presents a thorough evaluation of different pruning strategies across various model sizes.\", \"clear_results\": \"The results are well-presented and supported by detailed tables and analysis.\", \"future_work\": \"The paper acknowledges its limitations and suggests valuable directions for future research.\\n\\nCons\", \"limited_dataset\": \"The study is limited to 5,000 training samples, which may not be representative of larger datasets.\", \"performance_metrics\": \"The paper primarily uses perplexity to assess model performance. 
Incorporating additional metrics like ROUGE or BLEU scores could provide a more comprehensive evaluation.\", \"depth_of_discussion\": \"The implications of the findings could be discussed in greater depth, particularly in relation to existing literature and potential real-world applications.\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"Great Potential for a future longer submission.\", \"review\": [\"## Summary\", \"This paper empirically evaluates the impact of various weight pruning strategies on a language model\\u2019s ability to memorize. The authors highlight a tradeoff between model performance and memorization, with certain pruning strategies achieving better balance.\", \"## Strengths\", \"**Clarity:** The paper clearly defines the problem and describes the experimental setup effectively.\", \"**Motivation:** The study is well-motivated, addressing a relevant problem with established techniques.\", \"**Potential:** If the limitations and future directions are addressed, the work has the potential to be a strong 8-page conference submission.\", \"## Weaknesses\", \"**Lack of Random baseline:** While some of the pruning strategies seem to be inspired by the literature, the reader could still benefit from understanding why intelligent strategies of pruning is advantageous in memorization. This could be done by introducing a random baseline that prunes random $n\\\\%$ of the weights.\", \"**Lacking analysis on the tradeoff:** The paper introduces the tradeoff between performance and memorization\\u2014high-performing models tend to memorize more. While empirical results hint at this, the evidence for selecting the best tradeoff strategy is insufficient. A stronger case could be made by evaluating each strategy across multiple pruning levels (beyond just two).\", \"## Recommendation\", \"**Decision**: **Accept**\", \"### Additional Feedback\", \"Expanding the dataset and incorporating additional evaluation metrics, as noted by the authors, would further strengthen the paper's contributions.\"], \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"This review highlights the paper\\u2019s experimental design and relevance to AI privacy but notes its limited model scope and absence of comparisons with other privacy techniques.\", \"review\": \"Summary [This paper explores pruning as a method to reduce memorization in large language models (LLMs) and improve privacy. The authors test layer-wise, global, and selective pruning on Pythia models (160M\\u201312B parameters) trained on The Pile dataset. They measure memorization reduction using extractability metrics and analyze the trade-offs between privacy and model performance. The results show that pruning attention layers reduces memorization the most, but also increases perplexity, lowering model accuracy. The study suggests global pruning is more effective than layer-wise pruning and that pruning deeper layers balances memorization reduction and performance.]\\n\\nStrengths\\n[-Addresses an important problem: LLM memorization risks privacy violations and security breaches. The paper provides a practical approach to reducing this risk. -Clear experimental design: The study tests multiple pruning methods across various model sizes and evaluates their impact on memorization and performance. -Strong empirical results: The findings show which pruning strategies work best and explain why attention layers store memorized data. -Relevant to AI privacy and safety: This research can help organizations deploying LLMs reduce legal and security risks related to data leakage.]\\n\\nWeaknesses\\n[-Limited model and dataset scope: The study only tests Pythia models on The Pile dataset, limiting generalizability to larger models. -Lack of performance vs. privacy trade-off discussion: The study shows that pruning harms accuracy but does not analyze how much trade-off is acceptable in real-world applications. -No comparison with other privacy techniques: The paper does not compare pruning with differential privacy, knowledge distillation, or other defense methods.]\", \"rating\": \"7\", \"confidence\": \"2\"}"
]
} |
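For a concrete picture of the pruning regimes discussed in the paper and its reviews above (layer-wise pruning of attention projections vs. global magnitude pruning), the sketch below uses PyTorch's built-in pruning utilities on a toy module. The two-block module, the 30% sparsity level, and the choice of L1 magnitude pruning are illustrative assumptions; the paper's exact procedure on Pythia models may differ.

```python
import torch
from torch import nn
from torch.nn.utils import prune

# Toy comparison of layer-wise vs. global magnitude pruning using PyTorch's
# pruning utilities. The tiny module stands in for a real LLM.

class TinyBlock(nn.Module):
    def __init__(self, d=32):
        super().__init__()
        self.attn_proj = nn.Linear(d, d)    # stands in for an attention projection
        self.mlp = nn.Linear(d, d)          # stands in for an MLP layer

    def forward(self, x):
        return x + self.mlp(torch.relu(self.attn_proj(x)))

model = nn.Sequential(TinyBlock(), TinyBlock())

# Layer-wise: drop the smallest 30% of weights inside each attention projection.
for block in model:
    prune.l1_unstructured(block.attn_proj, name="weight", amount=0.3)

# Global: drop the smallest 30% of weights pooled across all listed tensors.
prune.global_unstructured(
    [(block.mlp, "weight") for block in model],
    pruning_method=prune.L1Unstructured,
    amount=0.3,
)

def sparsity(module):
    return float((module.weight == 0).float().mean())

for i, block in enumerate(model):
    print(f"block {i}: attn sparsity {sparsity(block.attn_proj):.2f}, "
          f"mlp sparsity {sparsity(block.mlp):.2f}")
```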
4KoMbO2RJ9 | Why Are Web AI Agents More Vulnerable Than Standalone LLMs? A Security Analysis | [
"Jeffrey Yang Fan Chiang",
"Seungjae Lee",
"Jia-Bin Huang",
"Furong Huang",
"Yizheng Chen"
] | Recent research has significantly advanced Web AI agents, introducing groundbreaking architectures and benchmarks demonstrating major progress in autonomous web interaction and navigation. However, recent studies have shown that many AI agents can execute malicious tasks and are more vulnerable than standalone LLMs. Our work studies why Web AI agents, built on safety-aligned backbone Large Language Models (LLMs), remain highly susceptible to following malicious user inputs. In particular, we investigate the sources of these vulnerabilities by analyzing the differences between Web AI agents and standalone LLMs in terms of their design and components, quantifying the vulnerability rate introduced by each component. Through a fine-grained evaluation to uncover nuanced jailbreaking signals, we identify three key factors in Web AI agents that make them more vulnerable than standalone LLMs: 1) directly including user input in the system prompt of LLMs, 2) generating actions in a multi-step manner, and 3) processing Event Streams (observation + action history) from web navigation. Furthermore, we observe that many current benchmarks and evaluations rely on mock-up websites, which could potentially lead to misleading results. Our findings highlight the need to prioritize security and robustness when designing the individual components of AI agents. We also suggest developing more realistic safety evaluation systems for Web AI agents. | [
"AI Agent",
"Web AI Agent",
"LLM",
"Jailbreaking"
] | Accept | https://openreview.net/pdf?id=4KoMbO2RJ9 | https://openreview.net/forum?id=4KoMbO2RJ9 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"rKep1gGh9i",
"crxtDN7l8g",
"MiknhDoEIP",
"FxvRTPqXvt"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740898115802,
1740896580605,
1741099978028,
1740927109861
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission98/Reviewer_8u3R"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission98/Reviewer_WVKn"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission98/Reviewer_x3Us"
]
],
"structured_content_str": [
"{\"title\": \"This paper examines why web-AI agents are more vulnerable to malicious prompts than standalone LLMs, offering an insightful evaluation framework with empirical evidence.\", \"review\": \"As the demand for AI agents rises, it is essential to understand why AI agents are susceptible to malicious user prompts. This paper addresses this issue by investigating why web-AI agents are more susceptible to following malicious prompts than their safety-aligned standalone LLMs.\\n\\nI believe the paper has done a very good job of assessing the key factors contributing to fragility and has presented a fine-grained evaluation framework with empirical evidence.\\n\\nI think the paper would have been stronger with a larger sample size and by using a more diverse set of LLMs, rather than just GPT-4.\\n\\nThis paper aligns with the workshop objectives and will clearly benefit the research community.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"Evaluating Web AI Agent Vulnerabilities \\u2013 Valuable but with Some Limitations\", \"review\": \"Overview:\\nThe paper presents valuable insights into the security risks of Web AI agents and introduces a novel fine-grained evaluation method. However, for a workshop submission, expanding the experiments, testing across different models, and refining the visual presentation would still be necessary.\", \"pros\": [\"Identifies key vulnerabilities in Web AI agents, providing meaningful security insights.\", \"Introduces a fine-grained harmfulness evaluation framework, improving jailbreak assessment.\", \"Well-structured ablation studies.\"], \"cons\": [\"Limited Scale of Experiments: The evaluation primarily relies on one model (GPT-4o-2024-0806). Given the diversity of Web AI agents, evaluating on multiple models (e.g., Claude, Gemini, open-source models) would strengthen the claims. The number of test cases appears to be limited (10 harmful requests across different categories). A larger dataset would provide more robust results.\", \"Lack of Clarity in Figure 2: Figure 2 plays a crucial role in illustrating the differences between Web AI agents and standalone LLMs, but it lacks clarity and a professional layout. A more visually structured version with clearer labels and explanations would improve readability.\", \"Unclear Definition of \\\"Real Websites\\\": The authors mention testing on \\\"real websites\\\" but do not clearly define what those websites are or how they were selected. It is also unclear whether the AI agent was fully interacting with the real web or whether interactions were limited in some way (e.g., sandboxed environments).\"], \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Interesting paper of value to the community but evaluations can be much more comprehensive\", \"review\": \"This paper prompts web agents from the OpenHands framework to carry out malicious requests and observes differences in jailbreaking success rate of a standalone LLM versus one in the agentic framework.\", \"pros\": [\"interesting study with a well-designed set of ablations for this particular agentic framework\"], \"cons\": [\"The authors should baseline with a stronger jailbreaking dataset. GPT-4o is far from being a model that achieves 0% jailbreaking success rate. A quick Google search for \\\"gpt-4o jailbreaks\\\" identifies multiple claimed successful jailbreaks and there are works like https://arxiv.org/abs/2404.02151 that even claim a 100% success rate.\", \"The authors should include other agentic frameworks and add ablations for those other possible system design choices. For example, Anthropic provides a reference implementation for their computer use agent that is much simpler and does not include obviously bad design choices like embedding user requests in the system prompt: https://github.com/anthropics/anthropic-quickstarts/tree/main/computer-use-demo Another popular web agentic framework to include is VisualWebArena: https://arxiv.org/abs/2401.13649 WindowsArena also provides a popular scaffolding: https://arxiv.org/abs/2409.08264v1 Finally, official implementations from OpenAI like Operator (https://openai.com/index/introducing-operator/) would be interesting to test once they are broadly available.\", \"The authors should include more and different backbone models instead of only GPT-4o.\"], \"rating\": \"6\", \"confidence\": \"4\"}"
]
} |
4E4KQoIyo4 | Order Independence With Finetuning | [] | Large language models (LLMs) demonstrate remarkable performance on many NLP tasks, yet often exhibit order dependence: simply reordering semantically identical tokens (e.g., answer choices in multiple-choice questions) can lead to inconsistent predictions. Recent work proposes Set-Based Prompting (SBP) as a way to remove order information from designated token subsets, thereby mitigating positional biases. However, applying SBP on base models induces an out-of-distribution input format, which can degrade in-distribution performance. We introduce a fine-tuning strategy that integrates SBP into the training process, “pulling” these set-formatted prompts closer to the model’s training manifold. We show that SBP can be incorporated into a model via fine-tuning. Our experiments on in-distribution (MMLU) and out-of-distribution (CSQA, ARC Challenge) multiple-choice tasks show that SBP fine-tuning significantly improves accuracy and robustness to answer-order permutations, all while preserving broader language modeling capabilities. We discuss the broader implications of order-invariant modeling and outline future directions for building fairer, more consistent LLMs. | [
"large language models",
"order dependence",
"trust",
"fairness",
"finetuning",
"multiple choice questions"
] | Reject | https://openreview.net/pdf?id=4E4KQoIyo4 | https://openreview.net/forum?id=4E4KQoIyo4 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"hOE3uxfC2X",
"ccyGxst7Oz",
"S7rzsGO7pF",
"7rRUSAOZfb"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740966377008,
1740887900321,
1741083397422,
1740863066898
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission35/Reviewer_4Ri4"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission35/Reviewer_EXEG"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission35/Reviewer_c1Cv"
]
],
"structured_content_str": [
"{\"title\": \"Limited novelty, but a neat paper with strong trends in the results\", \"review\": [\"### Summary:\", \"SBP removes unwanted order dependence in MCQs. However, SBP is not in the training data distribution. This paper therefore investigates finetuning so that SBP can be used without the format being out of distribution for the LLM. They investigate both normal CE and also a contrastive loss, and find that the contrastive loss works significantly better.\", \"### Strengths:\", \"Clearly written.\", \"Carefully constructed experiments and strong trends in the results.\", \"I appreciated the 4.6 Summarization Task paragraph. There should be more encouragement for papers to explain things that did not work, as there is often a lot that can be learned here.\", \"### Weaknesses:\", \"Limited novelty, but in general a very neat paper.\", \"Why is figure 1 labelled as being specific to llama7b-base? Is the figure not illustrating the general principle of SBP?\", \"Line 86-7: Figure 1 doesn\\u2019t really visualize what is stated (\\u201cas visualized in Figure 1, SBP applies (1) modified attention masks that do not enforce strict left-to-right order within certain sub-sequences, and (2) identical or parallel positional embeddings for tokens in that sub-sequence.\\u201d)\", \"The paper would benefit from testing on another family of models and checking whether the same trends hold.\", \"What is meant by \\u201cmodified accuracy\\u201d?\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"This paper addresses the critical issue of order dependence in LLMs for multiple-choice QA tasks.\", \"review\": \"### **Summary**\\n\\nThis paper addresses the critical issue of order dependence in LLMs for multiple-choice QA tasks. The authors propose fine-tuning models with Set-Based Prompting (SBP) and a margin-based contrastive loss, demonstrating improved robustness to answer permutations while preserving general language capabilities. The approach is well-motivated, methodologically sound, and empirically validated, though its applicability beyond QA tasks remains unproven.\\n\\n### **Strengths** \\n1. **Practical Focus**: Addresses a known issue (order dependence) in LLMs, relevant for QA reliability. \\n2. **Empirical Validation**: Demonstrates improved robustness to permutations on MMLU, CSQA, and ARC benchmarks. \\n3. **Parameter Efficiency**: Uses LoRA for fine-tuning, reducing computational costs. \\n\\n### **Weaknesses** \\n1. **Incremental Contribution**: Integrates existing techniques (SBP + contrastive loss) without novel algorithmic innovation. Merely fine-tuning on SBP-formatted data is a straightforward solution to prior SBP limitations. \\n2. **Narrow Scope**: Fails to generalize beyond QA tasks (e.g., summarization performance degrades under SBP). No exploration of tasks like ranking or dialogue. \\n3. **Technical Superficiality**: \\n - SBP implementation details (e.g., attention masking, positional embeddings) are glossed over, hindering reproducibility. \\n - Fixed margin (1.0) in the contrastive loss lacks justification; no ablation on margin sensitivity. \\n4. **Limited Data Diversity**: Fine-tuned exclusively on MMLU, raising doubts about cross-domain robustness. \\n5. **Overstated Significance**: Improvements are confined to specific QA setups, offering no broader insights into LLM invariance or bias mitigation.\", \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"comment\": \"The novelty of this work is very limited\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review for Submission Number 35\", \"review\": \"This paper proposes to improve the performance of LLMs at MCQs by fine-tuning with set-based prompting - the removal of positional information from certain tokens in the prompt (in the case of MCQs, the tokens corresponding to the answer choices) in order to avoid the well-known issue of order-sensitivity in LLMs.\\n\\nThe authors fine-tune with Llama-2-7B and Llama-2-7B-Chat on MMLU and evaluate performance on MMLU, CSQA and ARC - the latter two datasets acting as tests of generalisation of the method. Moreover, the authors examine the usage of a margin-based contrastive loss function that seeks to maximise the margin between the probability of the correct answer and the probability of the most likely wrong answer, as opposed to the typical cross-entropy loss function.\\n\\n## Strengths:\\n1. The paper is interesting and seeks to solve an important issue with LLMs.\\n\\n## Weaknesses:\\n\\nOverall, the paper suffers from some technical flaws.\\n\\n1. The methodology for set-based prompting/finetuning is not clearly stated in this paper - only a relatively terse description is given in Section 2.2. Although this is sufficient for an experienced reader to intuit the likely method applied, the approach of set-based prompting is far from standard enough to merit such a brief and high-level description.\\n2. Although the contrastive loss of lines 224-225 is indeed differentiable as claimed, it is only differentiable w.r.t. the max over $n_i$ - resulting in a single answer\\u2019s probability being (directly) increased or decreased per gradient step. It would be significantly more efficient to use a smooth approximation to the max function here.\\n3. Figure 2, which is the crux of the paper\\u2019s contribution, shows results on Llama-2-7B; the methodology for prompting with the question is not provided, but it does not make sense to use a base model for zero-shot QA (I suspect this is why an inexplicable result such as \\u2018base set based prompting\\u2019 being just as good as \\u2018base standard prompting\\u2019 on CSQA is found). As such, I discount entirely those results and focus on Figure 4, the results for Llama-2-7B-Chat, which is only presented in an Appendix.\\n4. Training with cross-entropy loss without set-based-prompting (what the authors label as \\u2018cross-entropy-control\\u2019) results in significant accuracy drops for both Llama-2-7B base and Llama-2-7B-chat across all tasks. The magnitude of accuracy drop is startling. This result makes little sense, and makes me doubt the integrity/correctness of the other results presented. This is my most serious concern with this paper.\\n5. I do not particularly understand the reason for use of the contrastive loss. Whilst it does appear to have performance benefits, that is not the primary purpose of this paper - the purpose and narrative revolves around set-based prompting instead. A like-for-like comparison should eschew any change in loss function.\\n6. The results on perplexity in Table 1 are also mystifying. I do not understand - and the authors attempt to provide no explanation for - the significant decrease in language modelling perplexity after their fine-tuning procedure for Llama-2-7B-Chat.\\nThe results demonstrated on the summarisation task with the use of set-based-finetuning are poor. Although, I do appreciate the inclusion of this result - and the thought behind this experiment.\\n7. 
Figure 2 and Figure 4 are very poorly presented in my opinion - it is not at all clear what the box plots and black dots refer to (although I was able to grok it eventually), and the use of \\u2018treatment\\u2019 and \\u2018control\\u2019 as terms is unnecessarily obtuse.\\n\\nI encourage the authors to focus more narrowly on the core message of the paper - finetuning with a set-based methodology - and expand on the direction around its use for tasks beyond MCQ. Fundamentally, however, an explanation must be provided for why even a standard baseline finetuning methodology results in such poor task performance.\", \"rating\": \"5\", \"confidence\": \"3\"}"
]
} |
3gze4cq9L1 | CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models | [
"Yuetai Li",
"Zhangchen Xu",
"Fengqing Jiang",
"Luyao Niu",
"Dinuka Sahabandu",
"Bhaskar Ramasubramanian",
"Radha Poovendran"
] | The remarkable performance of large language models (LLMs) in generation tasks has enabled practitioners to leverage publicly available models to power custom applications, such as chatbots and virtual assistants. However, the data used to train or fine-tune these LLMs is often undisclosed, allowing an attacker to compromise the data and inject backdoors into the models. In this paper, we develop a novel inference time defense, named CleanGen, to mitigate backdoor attacks for generation tasks in LLMs. CleanGen is a lightweight and effective decoding strategy that is compatible with the state-of-the-art (SOTA) LLMs. Our insight behind CleanGen is that compared to other LLMs, backdoored LLMs assign significantly higher probabilities to tokens representing the attacker-desired contents. These discrepancies in token probabilities enable CleanGen to identify suspicious tokens favored by the attacker and replace them with tokens generated by another LLM that is not compromised by the same attacker, thereby avoiding generation of attacker-desired content. We evaluate CleanGen against five SOTA backdoor attacks. Our results show that CleanGen achieves lower attack success rates (ASR) compared to five SOTA baseline defenses for all five backdoor attacks. Moreover, LLMs deploying CleanGen maintain helpfulness in their responses when serving benign user queries with minimal added computational overhead. | [
"Inference methods",
"interactive and collaborative generation"
] | Accept | https://openreview.net/pdf?id=3gze4cq9L1 | https://openreview.net/forum?id=3gze4cq9L1 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"sbx1Bean3F",
"XLgp7J5qaA",
"SAOyDNqpte",
"EgUnHdVqFr"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740409911051,
1740946282846,
1740164711400,
1741082606298
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission21/Reviewer_QqsG"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission21/Reviewer_ydbd"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission21/Reviewer_CABc"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Interesting backdoor detection method supported with clear insights and comprehensive results\", \"review\": \"The paper develops a method for detecting and mitigating backdoor attacks for generation tasks. In particular, the method leverages a reference model assumed to be separately fine-tuned on clean datasets (or poisoned using a different method) and builds upon the insight that targeted response tokens have significantly higher sampling probabilities compared with the target backdoored model (compared with the reference model). Comprehensive experiments support the superiority (effectiveness, helpfulness, efficiency) of the proposed method.\\nThe paper is well-written and well-structured. The key messages and the positioning of this paper within the existing literature can be easily understood by readers. The designed method is sensible, and the insights are clearly explained. The experiments are comprehensive, and the results suggest the proposed method's strong performance. \\n\\nNevertheless, I have a main question about the setup in which a \\\"clean\\\" model can be properly obtained (in addition to the backdoored target model). Are you using the same distribution of (benign) data samples to create the target backdoored model and to fine-tune the reference model? If this is the case, I'm not sure how realistic this setup is. In my opinion, the most realistic scenario is that either the model trainer only has access to a poisoned fine-tuning dataset, regardless of creating the target model or the reference model, or the model trainer resorts to some alternative easy-to-acquire \\\"clean\\\" dataset but likely comes from a different distribution. Providing and motivating the precise set of assumptions on the training setup of the reference model is recommended. In addition, the paper can discuss the potential adaptive backdoor strategies that may reduce the effectiveness of the proposed detection/defense methods.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"While the paper does propose an effective backdoor defense of backdoor generation addressing some of the concerns can enhance the paper.\", \"review\": \"Overview:\\n\\nThis paper presents a novel defense against backdoor attacks in generation tasks within NLP. The core idea is to use a reference model to compute a ratio between the probabilities assigned by the poisoned model and the reference model. Based on this ratio, a decision is made on whether to permit the token or not, thereby performing rejection sampling with the reference model.\\n\\nPositives\\n1. In terms of complexity, this is a viable solution. For instance, if paired with smaller and larger models, it can even outperform standard autoregressive generation, as the defense functions as a slight variation of speculative decoding.\\n2. The experiments provide sufficient evaluation of the proposed attacks.\\n3. The paper is well-written and easy to read.\\n\\nConcerns\\n1. Why is speculative decoding not viable as a defense given the presence of a reference model? In speculative decoding, when the reference model's probability is lower than the backdoor model's probability, the token is permitted with a probability of (reference/backdoor), which can be very small if the inverse of these probabilities (ratio in this paper) can be very large. While it is possible that, if the ratio is not extreme, some bad tokens may still be permitted, this brings me to the second point\\u2014generation may end up being upper-bounded by the utility of the reference model. Providing insights into this would be helpful.\\n\\n2. Another concern is the lack of analysis on whether the proposed generation approach is upper-bounded by the reference model's performance. Future versions of the paper could benefit from additional results addressing this. While I understand that if only backdoor-generated tokens show a massive difference in the ratio, most of the generation will still be guided by the backdoor model, thus preserving clean generation performance, a study into the distribution of the ratio between bad tokens and general tokens would be insightful.\\n\\n3. The method essentially relies on having a strong reference model, as it hinges on comparing the likelihood of a token in the reference model versus the backdoor model. While it is reasonable to assume that the likelihood of bad generations will differ across two different models, the possibility of the same backdoor attack affecting both models cannot be ignored. This is particularly relevant because backdoor attacks are often implemented through fine-tuning, and if the fine-tuning dataset is public, it could potentially be used to fine-tune both models. If a non-fine-tuned model is chosen as the reference model, it may negatively impact generation quality, as the response will be heavily influenced by the reference model's behavior.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Good paper, with a few objections\", \"review\": [\"Pros:\", \"This is a good paper. The defense seems highly practical and effective.\", \"The paper is well written, there are extensive evaluations, and the results look strong\", \"The graphics look nice.\"], \"questions\": [\"How does this defense work with transfer attacks? In this paper, it seems like you optimize the attacks against the target model. Clever attackers often optimize attacks against a large number of models to encourage transfer. If both the target and the defender model are susceptible to the attack, then your defense will likely fail.\", \"The inference cost of your defense is clearly high, since you need to evaluate using a secondary language model. Is there any way that you can be selective about soliciting feedback from the defender model? Does the defender model need to have any properties to guarantee robustness?\"], \"rating\": \"8\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
2nFgbiJTaa | DocImpact: Quantifying Document Impact in RAG-LLMs | [] | We present DocImpact, a novel methodology for measuring the influence of individual documents in Retrieval-Augmented Generation (RAG) systems. While RAG architectures have become increasingly popular in modern language models, understanding the precise contribution of each retrieved document to model outputs remains challenging. Our algorithm employs a counterfactual analysis by systematically excluding individual documents and measuring the divergence in model outputs compared to the full-context baseline. We implement our RAG-LLM using Pinecone as the database and Llama-3.1-70b as the LLM. | [
"RAG",
"Augmented Generation",
"Document Influence",
"LLM"
] | Reject | https://openreview.net/pdf?id=2nFgbiJTaa | https://openreview.net/forum?id=2nFgbiJTaa | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"pzn2OKuUU8",
"oZxziR4Ai6",
"cwTDTcSvHs"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1740844513836,
1741100034962,
1740910088911
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission84/Reviewer_dvmW"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission84/Reviewer_m8Hw"
]
],
"structured_content_str": [
"{\"title\": \"Paper lacks theoretical and empirical validation of the proposed method.\", \"review\": [\"## Summary\", \"The paper introduces an `Importance Score` to measure the impact of generation of single documents in RAG systems. The authors claim their method enables empirical identifiability of documents guiding generation.\", \"## Strengths\", \"**Novelty**: the importance score as a difference of similarity metrics between generations with and without a given document seems to be a novel idea.\", \"**Clarity and Organization**: overall clear and concise explanation.\", \"**Impact and Relevance**: the paper is relevant to interesting and difficult applications such as source attribution\", \"## Weaknesses\", \"**Theoretical**; absence of theoretical guarantees showing the correctness or consistency of their ranking method.\", \"**Experimental**: the evaluation does not provide any baseline. It seems trivial that removing 3 out of 10 documents from RAG generation changes the output of the RAG-LLM (line 142). Two baselines could consist of removing documents at random or removing the longest ones. Evaluation is restricted to one dataset/task.\", \"**Reproducibility**: vague implementation details regarding their evaluation pipeline. While the models and datasets are readily available, a list with all the LLM queries is missing. Only one example (line 138) is provided.\", \"**Computational Cost**: the computational overhead of the method seems to be quite challenging at scale, especially in the case of more complicated similarity measures such as semantic entropy which require an inner LLM sampling loop.\", \"**Other**: It is not clear how to measure the impact of 2 documents with complementary information. While they can be both almost irrelevant when considered by themselves, 2 documents used together could have a higher impact than their importance score sums. Naively adapting the idea to search over the space of subsets of documents would require incredible computational costs.\", \"## Recommendation\", \"**Decision**: **Reject**\", \"**Key Reasons**: weak evaluation setup and lack of theory\", \"## Supporting Arguments\", \"While the general idea of measuring such a similarity score seems interesting, no theory regarding a notion of correctness was provided. Moreover, the paper does not clearly show that the method performs well empirically per the lack of baselines. Removing 3 documents at random may change the generation just as much depending on the task at hand.\", \"## Questions for the Authors\", \"1. How can one interpret the scale of the Importance Score? Can it be relevant in absolute terms or can it only be used for relative rankings within a prompt?\", \"2. For the experimental setup, why did you not use Semantic Entailment to detect changes in the RAG-LLM outputs?\", \"## Additional Feedback\", \"**Suggestions for Experimental Enhancements**: implement baselines and expand experimental setting to other datasets and RAG tasks. Use objective metrics for detecting changes in generation quality with and without the most influential documents.\", \"**Writing and Presentation Suggestions**: Schematics explaining the method and graphs showing its performance could be useful for visual impact and intuition.\"], \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Review\", \"review\": [\"### Summary\", \"This paper introduces DocImpact, a method for measuring the influence of individual documents in RAG systems. While RAG allows LLMs to integrate external knowledge during inference, the extent to which individual retrieved documents impact the final model output remains unclear. The paper proposes an IS metric that quantifies the contribution of each document by systematically excluding it from the retrieval set and measuring the divergence in the model's output.\", \"### Strengths\", \"The paper is well-organized for the most part.\", \"The proposed metric addresses a critical gap in RAG transparency.\", \"### Weaknesses\", \"The proposed method is evaluated on a synthetic invoice dataset with a small number of documents. How does the approach scale to larger document collections?\", \"The paper does not compare the proposed metric against existing explainability metrics in retrieval. How does IS compare to existing benchmarks for measuring retrieval impact?\", \"The work done so far is a promising step, but further exploration is needed to ensure completeness and robustness.\"], \"rating\": \"4\", \"confidence\": \"4\"}"
]
} |
2HyKWpAB4i | Steering Fine-Tuning Generalization with Targeted Concept Ablation | [
"Helena Casademunt",
"Caden Juang",
"Samuel Marks",
"Senthooran Rajamanoharan",
"Neel Nanda"
] | Models often learn unintended behaviors during fine-tuning, such as adopting spurious correlations present in training data. We present a novel technique for controlling what models learn during fine-tuning by identifying and ablating specific sparse autoencoder latents that represent undesired concepts. Our approach steers models toward intended generalizations even in cases where multiple policies correctly fit the training data. We evaluate our method on two tasks, significantly outperforming baselines: a gender bias task containing spurious correlations and a double multiple choice task where models must learn to focus on intended questions while ignoring others. On gender bias, our method completely eliminates spurious correlations, leading to strong performance out of distribution. In double multiple choice, it succeeds in 10 out of 16 scenarios. Our results mark an initial step toward using interpretability techniques to ensure the safe and reliable deployment of frontier AI systems. | [
"Interpretability",
"Mechanistic Interpretability",
"Fine-Tuning",
"Artificial Intelligence",
"AI",
"Sparse Autoencoders",
"Machine Learning"
] | Accept | https://openreview.net/pdf?id=2HyKWpAB4i | https://openreview.net/forum?id=2HyKWpAB4i | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"k2r46Ak2oR",
"Ym2xAs4hGz",
"FiHRXFY4op",
"Dc7cGMytS9"
],
"note_type": [
"official_review",
"official_review",
"decision",
"official_review"
],
"note_created": [
1740019570047,
1740741629892,
1740856397239,
1740854097810
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission123/Reviewer_ywqM"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission123/Reviewer_ETdE"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission123/Reviewer_bLyh"
]
],
"structured_content_str": [
"{\"title\": \"Novel point with potential practical use in real world\", \"review\": \"Strengths:\\nClear Motivation & Novel Angle\", \"the_authors_tackle_a_critical_but_underexplored_problem\": \"steering the generalization path of a model when multiple solutions exist. By focusing on latent directions associated with undesirable behavior, the paper contributes a unique perspective to the interpretability and controllability of LLMs.\\n\\nUse of Sparse Autoencoders\\nThe paper capitalizes on recent work in SAEs to decompose model activations into interpretable features. This approach stands out for attempting targeted concept removal, rather than globally retraining on curated data or implementing broad regularization methods.\\n\\nWell-Designed Synthetic Tasks\\nBoth the gender bias and double multiple-choice tasks are well-defined, enabling a clear demonstration of how a model might rely on spurious correlations. The authors isolate scenarios where the model\\u2019s OOD behavior indicates whether it truly learned the intended concept.\", \"weakness\": \"Scope of Tasks & Datasets\\nThe tasks, though illustrative, are relatively simple or synthetic. It remains unclear whether the proposed approach would scale effectively to complex real-world domains with more intertwined spurious correlations.\\n\\nReliance on SAE Interpretability\\nSparse autoencoders, while promising, can exhibit limitations: partial reconstruction errors, unaligned latent spaces, and potential mismatch between a \\u201chuman concept\\u201d and a single latent direction. The paper\\u2019s approach hinges on identifying the correct latents to ablate, but the risk remains that some relevant features might be missed or incorrectly labeled.\", \"rating\": \"7\", \"confidence\": \"3\"}",
"{\"title\": \"SAE-Guided Ablation for Controlled Language Model Generalization\", \"review\": [\"**Summary**\", \"This paper presents a novel method that leverages SAEs to steer the generalization behavior of language models during fine-tuning. By identifying and ablating latent features associated with unintended generalizations (such as gender bias and task misalignment in double multiple choice scenarios), the authors aim to guide the model toward the intended behavior. Experiments on a gender bias task and a double multiple choice task show that the method can significantly improve out-of-distribution performance relative to baselines, including random ablations and interventions applied only at test time.\", \"**Strengths**\", \"**Innovative Methodology**: Combines SAEs with targeted ablation to steer model behavior, offering a novel solution to generalization control.\", \"**Empirical Validation**: Results on toy tasks\\u2014specifically in mitigating spurious correlations in gender bias and improving focus in double multiple choice tasks\\u2014demonstrate the potential of the proposed approach.\", \"**Strong Results**: Demonstrates near-complete elimination of gender bias and significant improvements in 12/16 double-choice scenarios.\", \"**Weaknesses**\", \"Computational Overhead: The reliance on SAEs and the associated markup for interpreting and ablating features introduces additional compute requirements. The paper does not sufficiently quantify this overhead or discuss its impact on scalability.\", \"Scalability Concerns: Experiments are limited to a 2B-parameter model and synthetic tasks; applicability to larger models or real-world scenarios is unclear.\", \"Limited Baselines: There is a noticeable absence of comparisons with alternative methods that do not rely on SAE-based approaches. Such comparisons could clarify whether the performance gains are specific to SAE techniques or are achievable through other means.\", \"**Questions**\", \"How much does the SAE-based pipeline increase training time compared to standard fine-tuning? Quantifying this would clarify practicality.\", \"Have you explored alternative methods that might offer similar benefits? A discussion and comparisson on possible non-SAE alternatives would enhance the paper.\", \"In instances where ablation did not improve performance, could you provide further diagnostics or insights to help understand these shortcomings?\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Great potential for expansion into a longer paper.\", \"review\": [\"## Summary\", \"The paper provides a novel technique to steer fine-tuning by ablating unwanted latent 'concepts' using Sparse AutoEnconders. The proposed method performs well on evaluation experiments outperforming a random baseline.\", \"## Strengths\", \"**Motivation**: the method is well-motivated and relevant to the field at large\", \"**Empirical Evaluation**: the evaluation setup is clear and concise, showing very good and promising results over baseline models and random latent ablations\", \"**Clarity and Organization**: the paper is overall quite clear and reads well\", \"**Impact and Relevance**: the method is clearly framed within the relevant literature and addresses pressing issues of LLMs like gender biases\", \"## Weaknesses\", \"**Clarity**: the authors explanation of how they automatically interpret the top 100 activating latents is not too clear to the reader.\", \"## Recommendation\", \"**Decision**: **Accept**\", \"**Key Reasons**: strong evaluation results, novel methodology, highly relevant to the field\", \"## Additional Feedback\", \"**Writing and Presentation Suggestions**: A slightly longer explanation in Appendix B that does not require the reader to read other papers would be quite useful.\"], \"rating\": \"7\", \"confidence\": \"3\"}"
]
} |
2AVAoVV8u7 | Disentangling Sequence Memorization and General Capability in Large Language Models | [
"Gaurav Rohit Ghosal",
"Pratyush Maini",
"Aditi Raghunathan"
] | Verbatim memorization in large language models remains a persistent and unsolved challenge, raising critical concerns for privacy, copyright, and responsible deployment. Existing research suggests that effective unlearning requires targeting the specific neurons responsible for memorization, as broad model updates fail to erase content reliably. However, we show that even these approaches rest on a flawed premise. Through controlled experiments, we demonstrate that memorized sequences are not naturally isolated to specific neurons during training, except in cases where the sequences are highly atypical. In this work, we put forward a new training paradigm that attempts to \textbf{isolate memorization to specific neurons by design}. The core challenge is that gradients from the repeated sequences entangle both ``generalizing'' features that improve general capability, in addition to sequence-specific memorization. We show that a simple change to standard training can implicitly disentangle these by leveraging metadata that identifies repeated sequences. We verify the efficacy of our method (\seqtd) in a proof-of-concept natural language setting and unveil the mechanism by which this disentanglement is possible through the training dynamics of memorization. We conclude by discussing the practical considerations of the deployment of \seqtd and highlight potential avenues for incorporating it into large-scale settings. | [
"Memorization",
"Unlearning",
"Localization"
] | Accept | https://openreview.net/pdf?id=2AVAoVV8u7 | https://openreview.net/forum?id=2AVAoVV8u7 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"xAFXryc3za",
"UFdtg4W3WY",
"77Fa8PRE21"
],
"note_type": [
"official_review",
"decision",
"official_review"
],
"note_created": [
1740881839690,
1741100004698,
1740894445663
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission96/Reviewer_WUgz"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission96/Reviewer_3ERg"
]
],
"structured_content_str": [
"{\"title\": \"Interesting approach to solving an important problem\", \"review\": \"Strengths:\\n\\nThe paper proposes a novel training-time strategy (SeqTD) to segregate memorization signals without severely damaging general performance. It provides interesting insights and a plausible mechanism\\u2014fewer interfering gradients letting memorization \\u201clive\\u201d in specialized neurons.\", \"concerns\": \"The entire framework relies on accurate repeated-text identification and consistent assignment of ID-based dropout masks, which may be difficult to implement at scale.\\n\\nEmpirical results come from relatively small GPT-medium\\u2013style models and a limited dataset (TinyStories). The generalization to standard large-language-model pipelines is not fully shown.\\n\\nNo direct demonstration that the approach seamlessly extends to partial, paraphrased, or incomplete repetitions.\\n\\nDespite these reservations, the paper provides a substantive contribution. It clarifies why naive post-hoc pruning or gradient-based localization can fail, and demonstrates how harnessing dropout plus consistent repeated-sequence \\u201cmasking\\u201d can isolate memorized text. It is a step toward making large-scale language models more \\u201cunlearnable\\u201d without major hits to performance.\", \"recommendation\": \"I find the central idea of \\u201csequence-tied dropout\\u201d compelling and original. I would encourage the authors to further investigate (1) how well the method scales on large, noisy corpora, (2) whether memorization might still pop up in attention heads or in partial overlap of memorization neurons, and (3) more robust ways to label repeated data in real-world text.\\n\\nOverall, this submission is promising and offers a meaningful new perspective on controlling memorization at training time. I think it would b a good addition to the workshop.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}",
"{\"title\": \"Anonymous Review\", \"review\": \"This paper discusses the localization of memorized content in parameter space. The paper states that existing localization methods struggle when memorized sequences are linguistically similar to the broader training distribution. Towards this, the paper introduces Sequence-Tied Dropout (SeqTD), which partitions MLP-layer neurons in transformers into shared neurons, updated by all examples, and memorization neurons, activated consistently by repeated sequences.\\n\\nBy enforcing dropout on all but a fixed subset of memorization neurons for repeated text, SeqTD isolates memorization while preventing reinforcement in shared neurons. This design leverages learning-and-forgetting cycles to maintain stable long-term storage in memorization neurons while allowing shared neurons to capture general linguistic patterns. SeqTD enables partial parameter sharing, preserving contributions from repeated text. SeqTD leverages learning dynamics to promote disentanglement and empirically validates its effectiveness in isolating memorization without compromising generalization.\", \"rating\": \"7\", \"confidence\": \"4\"}"
]
} |
26ORXfaJ56 | Automated Feature Labeling with Token-Space Gradient Descent | [
"Julian Schulz",
"Seamus Fallows"
] | We present a novel approach to feature labeling using gradient descent in token-space. While existing methods typically use language models to generate hypotheses about feature meanings, our method directly optimizes label representations by using a language model as a discriminator to predict feature activations. We formulate this as a multi-objective optimization problem in token-space, balancing prediction accuracy, entropy minimization, and linguistic naturalness. Our proof-of-concept experiments demonstrate successful convergence to interpretable single-token labels across diverse domains, including features for detecting animals, mammals, Chinese text, and numbers. While our current implementation is constrained to single-token labels and relatively simple features, the results suggest that token-space gradient descent could become a valuable addition to the interpretability researcher's toolkit. | [
"Mechanistic Interpretability",
"Feature Labeling",
"Large Language Models",
"Automated Labeling",
"Feature Analysis",
"Sparse Autoencoders"
] | Accept | https://openreview.net/pdf?id=26ORXfaJ56 | https://openreview.net/forum?id=26ORXfaJ56 | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"z3HKv0Iqqq",
"mGnzcDhSNG",
"CaFi3RFLTH"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1740966797602,
1740339237153,
1741084500871
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission47/Reviewer_Wgvm"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission47/Reviewer_6LmN"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"An interesting idea, but results are primarily a couple of hand-picked examples\", \"review\": [\"### Summary:\", \"The paper presents an approach to feature labeling which uses gradient descent on tokens to find the token that most closely matches the given feature.\", \"### Strengths:\", \"I am not familiar with the literature, but it seems like an interesting idea.\", \"I found the paper difficult to understand the first time reading through. It would benefit from a clear figure of the training loop and which parts are being optimized and iterated over, etc. Pseudocode could also be helpful.\", \"### Weaknesses:\", \"Ebrahimi et al. (2017) is mentioned as similar in the related work. How does the work relate to/differ from this?\", \"Is the feature that is being labelled a feature in an actual model? If so, what model? Or is it essentially being simulated in a rule-based way? This should be made clearer.\", \"It would be helpful to include an ablation of the different loss terms.\", \"Would benefit greatly from a meaningful metric of the performance rather than a couple of handpicked examples.\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"title\": \"Well-Scoped Proof of Concept\", \"review\": [\"This work is well-scoped and has a clear understanding of the strengths and weaknesses of its method. I think that it is valuable to prove that differentiable methods can effectively succeed at feature labeling, even though there are obvious computational problems that make scaling this method incredibly difficult. The paper is well-written, and the label representation choices are smart and effective. I have a couple of thoughts for potential ways to improve the clarity of the results:\", \"It would be useful to do a more direct contrast between the paper results and the effectiveness of prompting LLMs to provide feature explanations. For example, for the datasets you use, what is the distribution of LLM naive responses and how would it categorize features? It is worthwhile to have a baseline to show how much more effective your feature classification method is, especially in terms of confidence of classification. This feels like a fairly straightforward experiment to run.\", \"I am interested in seeing how the hyperparameter for the entropy loss informs the outcomes of your model predictions. I could imagine that having a large weighting factor toward collapsing the probability distribution over token space could incentivize the model to develop a high probability for an answer, even when that answer is incorrect. What happens in the failed cases when this parameter is smaller?\", \"I think the language around \\\"candidate descriptions\\\" is slightly confusing. This phrasing, alongside the labels being fixed in the legends of all training graphs, makes it seem like you are optimizing over a fixed number of potential labels (that might be hand-picked), instead of all potential labels in the token space. I think that this could be made less ambiguous.\", \"A small note: there seems to be a typo in Figure 1.\"], \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
1X6TeUoaLo | An Afrocentric Perspective on Algorithm Watermarking of AI-generated Content. | [] | Digital-driven misinformation, counterfeiting, and copyright violations have become a growing concern in Africa. The prevalence of Artificial intelligence content (AIGC) has the potential to widen its impact and create more challenges for the people on the continent. AIGC poses a dual challenge. First, creatives who have worked so hard to create a masterpiece see their work being illegally duplicated or used without their consent. The other unsuspecting individuals have fallen prey to misinformation caused by AIGC. The reason, amongst many, could be the regulatory gaps in the law governing data protection, copyright and even artificial intelligence. This paper argues that curating technical watermarking methodologies/techniques is insufficient, considering the uniqueness of the African continent. It further addresses the regulatory gaps by examining the existing laws and proposing an Afrocentric perspective on AIGC using Nigeria, Kenya, Egypt and South Africa as case studies. | [
"Watermark",
"trust",
"fairness"
] | Reject | https://openreview.net/pdf?id=1X6TeUoaLo | https://openreview.net/forum?id=1X6TeUoaLo | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"lUbtG6BIEd",
"MQXjcL9hkV",
"2wCx69qThP"
],
"note_type": [
"official_review",
"official_review",
"decision"
],
"note_created": [
1741030318051,
1740467719410,
1741099492393
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission145/Reviewer_NMF5"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission145/Reviewer_DLEc"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"A Nice Primer on Afrocentric Legality and Regulations regarding Watermarks\", \"review\": \"The authors discuss generative watermarking through an Afrocentric lens. Regulations and existing laws pertinent to watermarking in Nigeria, Kenya, Egypt, and South Africa are reviewed, with practical guidance provided. I have a few suggestions for the authors:\\n\\n1. Although watermarking spans any modality, as the theme of this workshop is language models, please consider directed language towards textual watermarking. Namely, some review of algorithms such as Edelman, Francati et al., \\\"Watermarks in the Sand [...]\\\" (2024), Kirchenbauer et al., \\\"A Watermark for Large Language Models\\\" (2023), etc., and how they might be governed by the cited laws and regulations is helpful to analyze how realistically applicable these algorithms are. Indeed, watermarking researchers often \\\"invent\\\" the scenarios under which watermarking may be used, and assessing practicality through a legal lens would be helpful for this community. \\n\\n2. Citations are incorrectly formatted. They appear after sentences. The references section needs to be fixed up as well, as certain references are running past the page.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"The paper introduces some related works of watermarking technology in AIGC, and give a summary of legal and regulatory for the watermarking technology in Africa.\", \"review\": \"Strengths:\\n1. The paper addresses an important issue: the detection and identification of false content generated by generative AI.\\n2. The paper provides extensive content on the legal and regulatory aspects of AIGC (AI-generated content) and digital watermarking technology in Africa.\", \"weaknesses\": \"1. This paper reads more like a legal and regulatory article rather than a computer science paper. I am not entirely sure if this aligns with the acceptance criteria for the ICLR workshop.\\n2. There are some obvious typographical errors in the paper, such as the single quotation marks around 'copyright infringement' in Section 4.1 and the double quotation marks around \\\"... using, selling or distributing. . . \\\" in Section 4.3.\", \"suggestions\": \"1. Could the authors consider adding some experiments to make this paper more relevant to the field of \\\"data science\\\"?\\n2. The authors may benefit from learning more about LaTeX typesetting on the website:\\u00a0https://www.overleaf.com/learn.\", \"rating\": \"3\", \"confidence\": \"5\"}",
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
0wMUdbvYLL | Scalable Fingerprinting of Large Language Models | [
"Anshul Nasery",
"Jonathan Hayase",
"Creston Brooks",
"Peiyao Sheng",
"Himanshu Tyagi",
"Pramod Viswanath",
"Sewoong Oh"
] | Model fingerprinting has emerged as a powerful tool for model owners to identify their shared model given API access. However, to lower false discovery rate, fight fingerprint leakage, and defend against coalitions of model users attempting to bypass detection, we argue that scaling up the number of fingerprints one can embed into a model is critical. Hence, we pose Scalability as a crucial requirement for good fingerprinting schemes. We experiment with fingerprint design at larger scales than previously considered, and propose a new method, dubbed Perinucleus sampling, to generate scalable, persistent, and harmless fingerprints. We demonstrate that this scheme can add 24,576 fingerprints to a Llama-3.1-8B model --- two orders of magnitude more than existing schemes --- without degrading the model's utility. Our inserted fingerprints persist even after supervised fine-tuning on other data. We further describe security risks for fingerprinting, and theoretically and empirically show how a scalable fingerprinting scheme like ours can help mitigate these risks. | [
"Fingerprinting"
] | Accept | https://openreview.net/pdf?id=0wMUdbvYLL | https://openreview.net/forum?id=0wMUdbvYLL | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"qFvaLgD20C",
"V85JEWIA5v",
"JSPpBcuEpf",
"7RoUFxfBu4"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740017129451,
1740839204849,
1740904344081,
1741082797256
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission22/Reviewer_diBH"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission22/Reviewer_AYJG"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission22/Reviewer_LWPD"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"Review\", \"review\": \"This paper proposes a novel scheme for generating and inserting fingerprints in LLMs. Employing the Perinucleus sampling method for fingerprint generation and leveraging model averaging and data - mixing regularization techniques for training, it notably boosts the number of insertable fingerprints. The scheme has been verified to ensure key fingerprint properties like harmlessness, persistence, and scalability.\\n\\n**Questions:**\\n\\n1. The experiments are solely based on the Llama-series Base models (1B, 3B, 8B). How effective is the method on models with diverse architectures and scales?\\n2. The information about the SFT experimental settings is not clearly described. Is training for only 2 epochs on the alpaca dataset sufficient?\\n3. How effective is the proposed method when fully fine - tuned in a downstream specific domain?\\n4. The paper claims that more fingerprints are better. However, how does the model's computational complexity change during inference as the number of inserted fingerprints increases?\\n5. What impact will the proposed method face when the fingerprinted model undergoes compression or quantization operations?\", \"rating\": \"5\", \"confidence\": \"3\"}",
"{\"title\": \"Review of Scalable Fingerprinting of Large Language Models\", \"review\": \"### **Summary**\\nThis paper focuses on the **scalability** of model fingerprinting, introducing **Perinucleus Sampling**, a novel method that allows embedding a significantly higher number of fingerprints compared to prior approaches without degrading model performance. The study also highlights security concerns such as collusion attacks, discussing how scalability can help mitigate these threats. \\n\\n### **Strengths** \\nThe primary contribution of this work is its ability to insert a **substantially larger number of fingerprints** without a major drop in model utility, addressing a critical limitation of prior fingerprinting schemes. The authors conduct **sufficient comparisons** with existing methods to illustrate that increasing the number of fingerprints does not significantly degrade performance. Additionally, they show that their approach **achieves higher persistence** when the model is fine-tuned on a different dataset, which is a crucial property for real-world deployment. \\n\\n### **Weaknesses** \\nThe **main limitation** of the work is the **lack of comprehensive empirical evaluations on the security aspects** of scalability. While the **theoretical discussion is strong**, there is **insufficient empirical validation**. Moreover, Appendix E lacks direct comparisons with baseline methods under adversarial conditions, making it unclear how the proposed method fares against existing methods under similar adversarial settings. Incorporating more empirical results under different attack scenarios and comparing against other fingerprinting techniques in these settings would greatly strengthen the security claims of the paper. \\n\\nOverall, while the paper makes a notable contribution to scalable fingerprinting with strong theoretical grounding and empirical validation on persistence and harmlessness.\", \"rating\": \"6\", \"confidence\": \"4\"}",
"{\"title\": \"Interesting work on LLM fingerprint scalability\", \"review\": \"This paper focuses on improving the scalability of model fingerprinting for large language models (LLMs). The paper proposes a new method, Perinucleus sampling, which aims to embed a large number of fingerprints into LLMs without significantly degrading their utility.\", \"pros\": [\"The paper argues that scalability of fingerprints is critical, because proving ownership via fingerprints involves revealing the fingerprint used. This is a novel perspective that has been rarely discussed. In Section 5 the paper exemplifies how scalability is necessary for security with detailed analyses.\", \"The paper demonstrates that even with a large number of fingerprints, the degradation in model performance is minimal. This balance between uniqueness and harmlessness is crucial for practical deployment.\"], \"cons\": [\"Since the method uses a large number of fingerprints, how does the fingerprint training time scale? Does it require significantly more training time compared to other methods? The paper has not addressed this issue.\"], \"rating\": \"8\", \"confidence\": \"5\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
09tnQgqKuZ | ToolScan: A Benchmark For Characterizing Errors In Tool-Use LLMs | [
"Shirley Kokane",
"Ming Zhu",
"Tulika Manoj Awalgaonkar",
"Jianguo Zhang",
"Akshara Prabhakar",
"Thai Quoc Hoang",
"Zuxin Liu",
"Rithesh R N",
"Liangwei Yang",
"Weiran Yao",
"Juntao Tan",
"Zhiwei Liu",
"Huan Wang",
"Juan Carlos Niebles",
"Shelby Heinecke",
"Caiming Xiong",
"Silvio Savarese"
] | Evaluating Large Language Models (LLMs) is one of the most critical aspects of building a performant compound AI system. Since the output from LLMs propagate to downstream steps, identifying LLM errors is crucial to system performance. A common task for LLMs in AI systems is tool use. While there are several benchmark environments for evaluating LLMs on this task, they typically only give a success rate without any explanation of the failure cases. To solve this problem, we introduce ToolScan, a new benchmark to identify error patterns in LLM output on tool-use tasks. Our benchmark data set comprises of queries from diverse environments that can be used to test for the presence of seven newly characterized error patterns. Using ToolScan, we show that even the most prominent LLMs exhibit these error patterns in their outputs. Researchers can use these insights from ToolScan to guide their error mitigation strategies. We open-source our evaluation framework at https://anonymous.4open.science/r/ToolScan-1474 . | [
"LLM",
"Function Calling",
"Benchmark",
"Error Detection",
"Evaluation"
] | Accept | https://openreview.net/pdf?id=09tnQgqKuZ | https://openreview.net/forum?id=09tnQgqKuZ | ICLR.cc/2025/Workshop/BuildingTrust | 2025 | {
"note_id": [
"zOd6Zivcx6",
"yVcqjGj1PW",
"shDLuvF5Zw",
"T5swengt2G"
],
"note_type": [
"official_review",
"official_review",
"official_review",
"decision"
],
"note_created": [
1740234554787,
1740856579107,
1740880654817,
1741083984856
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission39/Reviewer_VsNY"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission39/Reviewer_V8aV"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Submission39/Reviewer_SPqX"
],
[
"ICLR.cc/2025/Workshop/BuildingTrust/Program_Chairs"
]
],
"structured_content_str": [
"{\"title\": \"A useful benchmark\", \"review\": \"This paper presents a new benchmark for fine-grained evaluation of the LLM-with-tools framework.\", \"strengths\": \"The paper is well-written and clear, the benchmark is useful, and analysing tool-calling performance by error type allows more informative evaluations. The authors test multiple, new and SoTA LLMs.\", \"weaknesses\": \"The evaluated LLMs are all base models prompted in context. It would be interesting to compare these with fine-tuned tool-augmented LLMs (e.g. Toolformer, ToolkenGPT etc.) and examine how the error patterns may change.\", \"rating\": \"7\", \"confidence\": \"4\"}",
"{\"title\": \"Review\", \"review\": \"This paper provides an important benchmark for studying tool-use. Rather than just report correctness rates or progress rates in multi-step tool-use, this paper introduces a well-designed benchmark with very fine-grained metrics for evaluating tool-use. One con might be that it is automatically constructed, but I think automatic construction is reasonable for a tool-use benchmark and is not automatically a con.\", \"rating\": \"8\", \"confidence\": \"4\"}",
"{\"title\": \"ToolScan Review\", \"review\": [\"## Contributions\", \"The authors introduce a benchmark that augments existing agentic-flow API-call benchmarks. They formalize the common reasons that agents fail on these cases, and present a framework that categorizes the trajectories into the respective failure modes. They present results from ablation studies on many SOTA models.\", \"## Strengths\", \"Augmented datasets with rephrasing + added confusion/complexity, since LLMs need to be able to function even when the instructions are not perfectly clear\", \"Useful to break down the failure cases into more granular failure reasons for interpretability\", \"Interesting ablation study results reported\", \"## Weaknesses\", \"Some minor typos (weather and movie)\", \"The entire dataset, across so many fields, only has 150 cases?\", \"The mathematical description of LLM agents is likely not necessary, since many papers in this space have already formalized their mathematical underpinnings\", \"Does not include any Anthropic models\", \"## Questions\", \"How are you determining which of the seven failure cases each trajectory reaches? A longer description of the methodology of that component of your framework would be helpful.\"], \"rating\": \"6\", \"confidence\": \"3\"}",
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
w63aCqNRFp | Possibility for Proactive Anomaly Detection | [
"Jinsung Jeon",
"Jaehyeon Park",
"Sewon Park",
"Jeongwhan Choi",
"Minjung Kim",
"Noseong Park"
] | Time-series anomaly detection, which detects errors and failures in a workflow, is one of the most important topics in real-world applications. The purpose of time-series anomaly detection is to reduce potential damages or losses. However, existing anomaly detection models detect anomalies through the error between the model output and the ground truth (observed) value, which makes them impractical. In this work, we present a $\textit{proactive}$ approach for time-series anomaly detection based on a time-series forecasting model specialized for anomaly detection and a data-driven anomaly detection model. Our proactive approach establishes an anomaly threshold from training data with a data-driven anomaly detection model, and anomalies are subsequently detected by identifying predicted values that exceed the anomaly threshold. In addition, we extensively evaluated the model using four anomaly detection benchmarks and analyzed both predictable and unpredictable anomalies. We attached the source code as supplementary material. | [
"Time-series Anomaly Detection",
"Proactive Approach",
"Time-series Forecasting"
] | Accept | https://openreview.net/pdf?id=w63aCqNRFp | https://openreview.net/forum?id=w63aCqNRFp | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"PI5t8xfyOh"
],
"note_type": [
"decision"
],
"note_created": [
1741192524246
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
vwDshzzBrl | Challenges of Decomposing Tools in Surgical Scenes Through Disentangling The Latent Representations | [
"Sai Lokesh Gorantla",
"Raviteja Sista",
"Apoorva Srivastava",
"Utpal De",
"Partha Pratim Chakrabarti",
"Debdoot Sheet"
] | Image generation through disentangling object representations is a critical area of research with significant potential. Disentanglement involves separating the representation of objects and their attributes, enabling greater control over the generated output. However, existing approaches are limited to disentangling only the objects’ attributes and generating images with selected combinations of attributes. This study explores learning object-level disentanglement of semantically rich latent representation using von-Mises-Fisher (vMF) distributions. The proposed approach aims to disentangle compressed representations into object and background classes. The approach is tested on surgical scenes for disentanglement of tools and background information using the Cholec80 dataset. Achieving tool-background disentanglement provides an opportunity to generate rare and custom surgical scenes. However, the proposed method learns to disentangle representations based on pixel intensities. This study uncovers the challenges and shortfalls in achieving object-level disentanglement of the compressed representations using vMF distributions. The code for this study is available at https://github.com/it-is-lokesh/vMF-disentanglement-challenges. | [
"Compressed representations",
"Disentanglement",
"Feature decomposition",
"Surgical scenes",
"vMF kernels"
] | Accept | https://openreview.net/pdf?id=vwDshzzBrl | https://openreview.net/forum?id=vwDshzzBrl | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"xb4YdjoEiL"
],
"note_type": [
"decision"
],
"note_created": [
1741192488984
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
uVh0Ac45HT | ADDRESSING FINANCIAL MARKET UNCERTAINTY: EXTREMA PREDICTION WITH SEQUENCE MODELS | [] | Deep learning models, particularly sequence-based architectures, are widely used for trend prediction and time series analysis in financial markets. This paper investigates a fundamental aspect of chart pattern formations, the prediction of local extrema types. By classifying extrema into four distinct categories using historical extrema data, we aim to provide a novel perspective on chart pattern identification. However, our findings reveal that these models struggle to generalize under data distribution shifts, achieving significantly lower prediction accuracy on out-of-training data. These results underscore the limitations of deep learning-based strategies in dynamic financial environments and highlight the need for robust methods to address market variability. | [
"Extrema Prediction",
"Sequence Models",
"Data Shift"
] | Reject | https://openreview.net/pdf?id=uVh0Ac45HT | https://openreview.net/forum?id=uVh0Ac45HT | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"wGECXNTIKB"
],
"note_type": [
"decision"
],
"note_created": [
1741192449274
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
tgYt1EmX7u | On the Privacy Risks of Spiking Neural Networks: A Membership Inference Analysis | [
"Junyi Guan",
"Abhijith Sharma",
"Chong Tian",
"Salem Lahlou"
] | Spiking Neural Networks (SNNs) are increasingly explored for their energy efficiency and robustness in real-world applications, yet their privacy risks remain largely unexamined. In this work, we investigate the susceptibility of SNNs to Membership Inference Attacks (MIAs), a major privacy threat where an adversary attempts to determine whether a given sample was part of the training dataset. While prior work suggests that SNNs may offer inherent robustness due to their discrete, event-driven nature, we find that this resilience diminishes as latency (T) increases. Furthermore, we introduce an input dropout strategy under a black-box setting that significantly enhances membership inference in SNNs. Our findings challenge the assumption that SNNs are inherently more secure: although they are expected to be more robust, our results reveal that SNNs exhibit privacy vulnerabilities comparable to those of ANNs. | [
"Spiking Neural Networks;Privacy;Trustworthiness"
] | Accept | https://openreview.net/pdf?id=tgYt1EmX7u | https://openreview.net/forum?id=tgYt1EmX7u | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"AFF8m3S5Zu"
],
"note_type": [
"decision"
],
"note_created": [
1741192554373
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
sR8LNsb86a | When RNN-based Marked Point Processes Fail in Real-World Finance: A Tiny Paper | [] | Neural Marked Temporal Point Process (MTPP) models have shown promise in controlled benchmarks for forecasting and event pattern modeling in finance. However, when deploying Recurrent Neural Network (RNN)-based MTPPs on large-scale, high-dimensional financial event streams, we encountered unexpected challenges: ballooning parameter sizes, increased computational costs, and training instability. This short paper outlines (1) the financial use case, (2) the literature-proposed neural MTPP solution, (3) the negative outcomes observed, and (4) our investigation into why standard MTPPs fail to generalize as promised in real-world conditions. | [
"recurrent neural networks",
"time series",
"finance"
] | Reject | https://openreview.net/pdf?id=sR8LNsb86a | https://openreview.net/forum?id=sR8LNsb86a | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"rElVZYeNKH"
],
"note_type": [
"decision"
],
"note_created": [
1741192711535
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
pCm5JPFSGE | On the Limits of Applying Graph Transformers for Brain Connectome Classification | [
"Jose Miguel Lara Rangel",
"Clare Elizabeth Heinbaugh"
] | Brain connectomes offer detailed maps of neural connections within the brain. Recent studies have proposed novel connectome graph datasets and attempted to improve connectome classification by using graph deep learning. With recent advances demonstrating transformers’ ability to model intricate relationships and to outperform other architectures in various domains, this work explores their performance on the novel NeuroGraph benchmark datasets and synthetic variants derived from probabilistically removing edges to simulate noisy data. Our findings suggest that graph transformers offer no major advantage over traditional GNNs on this dataset. Furthermore, both traditional and transformer GNN models maintain accuracy even with all edges removed, suggesting that the dataset’s graph structures may not significantly impact predictions. We propose further assessing NeuroGraph as a brain connectome benchmark, emphasizing the need for well-curated datasets and improved preprocessing strategies to obtain meaningful edge connections. | [
"Graph Deep Learning",
"Graph Transformers",
"Connectomes",
"Synthetic Dataset",
"Static Connectome Classification"
] | Accept | https://openreview.net/pdf?id=pCm5JPFSGE | https://openreview.net/forum?id=pCm5JPFSGE | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"K0jdi3DLtX"
],
"note_type": [
"decision"
],
"note_created": [
1741192511264
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ojRs6YSx9m | Do Not Overestimate Black-box Attacks | [
"Han Wu",
"Sareh Rowlands",
"Johan Wahlstrom"
] | As cloud computing becomes pervasive, deep learning models are deployed on cloud servers and then provided as APIs to end users. However, black-box adversarial attacks can fool image classification models without access to model structure and weights. Recent studies have reported attack success rates of over 95\% with fewer than 1,000 queries. The question then arises: have black-box attacks become a real threat to cloud APIs? To shed some light on this, our research indicates that black-box attacks are not as effective against cloud APIs as proposed in research papers, due to several common mistakes that overestimate the efficiency of black-box attacks. To avoid similar mistakes, we conduct black-box attacks directly on cloud APIs rather than local models. | [
"Adversarial Attacks",
"Black-box Attacks",
"Image Classification",
"Cloud Service"
] | Accept | https://openreview.net/pdf?id=ojRs6YSx9m | https://openreview.net/forum?id=ojRs6YSx9m | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"5YBrAeVry4"
],
"note_type": [
"decision"
],
"note_created": [
1741192504166
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
o3yNQVeRy1 | Preserving Product Fidelity in Large Scale Image Recontextualization with Diffusion Models | [] | We present a framework for high-fidelity product image recontextualization using text-to-image diffusion models and a novel data augmentation pipeline. This pipeline leverages image-to-video diffusion, in/outpainting, and counterfactual generation to create synthetic training data, addressing limitations of real-world data collection for this task. Our method improves the quality and diversity of generated images by disentangling product representations and enhancing the model's understanding of product characteristics. Evaluation on the ABO dataset and a private product dataset, using automated metrics and human assessment, demonstrates the effectiveness of our framework in generating realistic and compelling product visualizations, with implications for diverse applications such as e-commerce and virtual product showcasing. | [
"Diffusion Models",
"Product Recontextualization",
"Object Personalization",
"Synthetic Data Augmentation",
"Novel View Synthesis",
"E-commerce"
] | Reject | https://openreview.net/pdf?id=o3yNQVeRy1 | https://openreview.net/forum?id=o3yNQVeRy1 | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"iPLNbrCaRd"
],
"note_type": [
"decision"
],
"note_created": [
1741192635419
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
nAHrEpNqFS | Why are small transformers not better? | [] | The attention mechanism is powering rapid progress in large-scale generative AI. It is conversely exceedingly difficult to find small-scale applications for which attention-based models outperform traditional approaches, such as multi-layer perceptrons or recurrent networks. We examine this problem in the context of `task switching'. In this framework, models work on ongoing token sequences, with the current task determined by stochastically interseeded control tokens. We show that standard transformers cannot solve a basic reference model, IARC, which is based on finite-domain arithmetic. The model contains a trivial unary operation (I: increment the current input), a likewise trivial binary operation (A: add the last two inputs), and reverse copy (R), a standard memory task. A fourth control token (C) adds recursive context dependency by modifying current tasks. Tasks are maintained as long as no new control tokens appear in the prompt, which happens stochastically every 3-9 steps. We show that transformers, LSTM recurrent networks, and plain MLPs of similar sizes ($\sim$1.5M parameters) achieve only modest prediction accuracies of about 45\%. As a counter-test, we trained transformers containing a modified attention mechanism, expressive attention, finding performance levels of around 95\%. Our results indicate that the workings of attention can be understood better, and even improved, when comparing qualitatively different formulations in a task-switching setting. | [
"transformer",
"attention",
"task-switching"
] | Reject | https://openreview.net/pdf?id=nAHrEpNqFS | https://openreview.net/forum?id=nAHrEpNqFS | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"ifhngD2gDE"
],
"note_type": [
"decision"
],
"note_created": [
1741192584563
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
mbHG1TPWq9 | Modeling speech emotion with label variance and analyzing performance across speakers and unseen acoustic conditions | [
"Vikramjit Mitra",
"Amrit Romana",
"Dung Tran",
"Erdrin Azemi"
] | Spontaneous speech emotion data usually contain perceptual grades, where graders assign emotion scores after listening to the speech files. Such perceptual grades introduce uncertainty in labels due to grader opinion variation. Grader variation is addressed by using consensus grades as ground truth, where the emotion with the highest vote is selected. Consensus grades fail to consider ambiguous instances where a speech sample may contain multiple emotions, as captured through grader opinion uncertainty. We demonstrate that using the probability density function of the emotion grades as targets, instead of the commonly used consensus grades, provides better performance on benchmark evaluation sets compared to results reported in the literature. We show that a saliency-driven foundation model (FM) representation selection helps to train a state-of-the-art speech emotion model for both dimensional and categorical emotion recognition. Comparing representations obtained from different FMs, we observed that focusing on overall test-set performance can be deceiving, as it fails to reveal the models' generalization capacity across speakers and gender. We demonstrate that performance evaluation across multiple test sets and performance analysis across gender and speakers are useful in assessing the usefulness of emotion models. Finally, we demonstrate that label uncertainty and data skew pose a challenge to model evaluation, where instead of using the best hypothesis, it is useful to consider the 2- or 3-best hypotheses. | [
"Emotion Recognition",
"Foundation Model representation",
"Robustness"
] | Accept | https://openreview.net/pdf?id=mbHG1TPWq9 | https://openreview.net/forum?id=mbHG1TPWq9 | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"VSveekrQrJ"
],
"note_type": [
"decision"
],
"note_created": [
1741192649099
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ly4kzPbIP8 | When Less is More: One Strategic Step in LLM Refinement | [] | Addressing hallucinations in LLMs for Math Word Problems (MWPs) is key to reliability and efficiency. We optimize the trade-off between accuracy and computation in CoT reasoning by verifying only the first step before proceeding. A verifier assesses correctness, halting generation if incorrect. This approach reduces token generation time by 30\% with under 5\% accuracy loss, while corrections improve accuracy by up to 10\%. By skipping flawed reasoning early, our method balances accuracy and efficiency, cutting unnecessary computation. | [
"self-refinement",
"self-reasoning",
"maths",
"first-step"
] | Reject | https://openreview.net/pdf?id=ly4kzPbIP8 | https://openreview.net/forum?id=ly4kzPbIP8 | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"Gcxoasy2Am"
],
"note_type": [
"decision"
],
"note_created": [
1741192615144
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Reject\", \"title\": \"Paper Decision\"}"
]
} |
l77JMZx3J9 | Overtrained Language Models Are Harder to Fine-Tune | [
"Jacob Mitchell Springer",
"Sachin Goyal",
"Kaiyue Wen",
"Tanishq Kumar",
"Xiang Yue",
"Sadhika Malladi",
"Graham Neubig",
"Aditi Raghunathan"
] | Large language models are pre-trained with an ever-increasing token budget, operating under the largely unexamined premise that better pre-training performance translates to better downstream performance. In this work, we show that this widely-held assumption is in fact false! Pre-training on an extremely large number of tokens eventually makes the model harder to fine-tune, leading to worse downstream performance. For instance, after instruction tuning or multimodal fine-tuning, OLMo-1B models pre-trained on 3T tokens underperform their 2.3T-token counterparts by over $2\%$ on standard LLM benchmarks. Controlled experiments and theoretical analysis show that the phenomenon of catastrophic overtraining is both fundamental and universal. Our results suggest that as token budgets continue to scale, models will experience increasingly severe fine-tuning degradation across a wider range of tasks. This calls for a critical reassessment of pre-training design that takes into account the entire model lifecycle. | [
"pre-training",
"fine-tuning",
"catastrophic forgetting",
"transfer learning"
] | Accept | https://openreview.net/pdf?id=l77JMZx3J9 | https://openreview.net/forum?id=l77JMZx3J9 | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"QCeYNiV723"
],
"note_type": [
"decision"
],
"note_created": [
1741192745734
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
kPMfYS2ugs | Know Thy Judge: On the Robustness Meta-Evaluation of LLM Safety Judges | [
"Francisco Eiras",
"Eliott Zemour",
"Eric Lin",
"Vaikkunth Mugunthan"
] | Large Language Model (LLM) based judges form the underpinnings of key safety evaluation processes such as offline benchmarking, automated red-teaming, and online guardrailing. This widespread requirement raises the crucial question: *can we trust the evaluations of these evaluators?* In this paper, we highlight two critical challenges that are typically overlooked: (i) evaluations in the wild where factors like prompt sensitivity and distribution shifts can affect performance and (ii) adversarial attacks that target the judge. We highlight the importance of these through a study of commonly used safety judges, showing that small changes such as the style of the model output can lead to jumps of up to 0.24 in the false negative rate on the same dataset, whereas adversarial attacks on the model generation can fool some judges into misclassifying 100% of harmful generations as safe ones. These findings reveal gaps in commonly used meta-evaluation benchmarks and weaknesses in the robustness of current LLM judges, indicating that low attack success under certain judges could create a false sense of security. | [
"llm-as-judge",
"meta-evaluations",
"large language models"
] | Accept | https://openreview.net/pdf?id=kPMfYS2ugs | https://openreview.net/forum?id=kPMfYS2ugs | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"M7IFpQdM4O"
],
"note_type": [
"decision"
],
"note_created": [
1741192580382
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
j53Vs162RA | Last Layer Empirical Bayes | [
"Valentin Villecroze",
"Yixin Wang",
"Gabriel Loaiza-Ganem"
] | The task of quantifying the inherent uncertainty associated with neural network predictions is a key challenge in artificial intelligence. Bayesian neural networks (BNNs) and deep ensembles are among the most prominent approaches to tackle this task. Both approaches produce predictions by computing an expectation of neural network outputs over some distribution on the corresponding weights; this distribution is given by the posterior in the case of BNNs, and by a mixture of point masses for ensembles. Inspired by recent work showing that the distribution used by ensembles can be understood as a posterior corresponding to a learned data-dependent prior, we propose last layer empirical Bayes (LLEB). LLEB instantiates a learnable prior as a normalizing flow, which is then trained to maximize the evidence lower bound; to retain tractability we use the flow only on the last layer. We show why LLEB is well motivated, and how it interpolates between standard BNNs and ensembles in terms of the strength of the prior that they use. LLEB performs on par with existing approaches, highlighting that empirical Bayes is a promising direction for future research in uncertainty quantification. | [
"uncertainty quantification",
"empirical Bayes",
"normalizing flows",
"variational inference"
] | Accept | https://openreview.net/pdf?id=j53Vs162RA | https://openreview.net/forum?id=j53Vs162RA | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"AfW9tW8nnf"
],
"note_type": [
"decision"
],
"note_created": [
1741192714565
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |
ilz2ghLgzt | On the Limitations of LLM-Synthesized Social Media Misinformation Moderation | [
"Sahajpreet Singh",
"Jiaying Wu",
"Svetlana Churina",
"Kokil Jaidka"
] | Despite significant advances in Large Language Models (LLMs), their effectiveness in social media misinformation moderation -- specifically in generating high-quality moderation texts with accuracy, coherence, and citation reliability comparable to human efforts like Community Notes (CNs) on X -- remains an open question. In this work, we introduce ModBench, a real-world misinformation moderation benchmark consisting of tweets flagged as misleading alongside their corresponding human-written CNs. We evaluate representative open- and closed-source LLMs on ModBench, prompting them to generate CN-style moderation notes with access to human-written CN demonstrations and relevant web-sourced references utilized by CN creators. Our findings reveal persistent and significant flaws in LLM-generated moderation notes, signaling the continued necessity of incorporating trustworthy human-written information to ensure accurate and reliable misinformation moderation. | [
"Misinformation",
"Content Moderation",
"Community Notes",
"LLMs"
] | Accept | https://openreview.net/pdf?id=ilz2ghLgzt | https://openreview.net/forum?id=ilz2ghLgzt | ICLR.cc/2025/Workshop/ICBINB | 2025 | {
"note_id": [
"cXKtLE0FUh"
],
"note_type": [
"decision"
],
"note_created": [
1741192445155
],
"note_signatures": [
[
"ICLR.cc/2025/Workshop/ICBINB/Program_Chairs"
]
],
"structured_content_str": [
"{\"decision\": \"Accept\", \"title\": \"Paper Decision\"}"
]
} |