forum_id: stringlengths 10-10
forum_title: stringlengths 3-179
forum_authors: sequencelengths 0-76
forum_abstract: stringlengths 1-3.52k
forum_pdf_url: stringlengths 0-49
note_id: stringlengths 10-10
note_type: stringclasses, 6 values
note_created: int64, 1,697B-1,737B
note_replyto: stringlengths 10-10
note_readers: sequencelengths 1-6
note_signatures: sequencelengths 1-1
note_text: stringlengths 10-45k
3JLtuCozOU
A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage
[ "Rui Xin", "Niloofar Mireshghallah", "Shuyue Stella Li", "Michael Duan", "Hyunwoo Kim", "Yejin Choi", "Yulia Tsvetkov", "Sewoong Oh", "Pang Wei Koh" ]
The release of sensitive data often relies on synthetic data generation and Personally Identifiable Information~(PII) removal, with an inherent assumption that these techniques ensure privacy. However, the effectiveness of sanitization methods for text datasets has not been thoroughly evaluated. To address this critical gap, we propose the first privacy evaluation framework for the release of sanitized textual datasets. In our framework, a sparse retriever initially links sanitized records with target individuals based on known auxiliary information. Subsequently, semantic matching quantifies the extent of additional information that can be inferred about these individuals from the matched records. We apply our framework to two datasets: MedQA, containing medical records, and WildChat, comprising individual conversations with ChatGPT. Our results demonstrate that seemingly innocuous auxiliary information, such as specific speech patterns, can be used to deduce personal attributes like age or substance use history from the synthesized dataset. We show that private information can persist in sanitized records at a semantic level, even in synthetic data. Our findings highlight that current data sanitization methods create a false sense of privacy by making only surface-level textual manipulations. This underscores the urgent need for more robust protection methods that address semantic-level information leakage.
/pdf/c35de3cdec682abb0d8e9524b8e2fcf4b27ee810.pdf
nD7l4fbv5L
official_review
1,728,351,150,391
3JLtuCozOU
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission235/Reviewer_CPsF" ]
title: Good work in privacy assessment for LLMs review: This work has high quality, clear expression, and strong practical significance; it attempts to evaluate privacy sanitization methods. pros: 1. Adequate experiments and interesting key findings 2. The proposed semantic-level privacy is interesting and novel. 3. The introduction of human assessments enhances credibility 4. The effectiveness of typical protection methods is tested and analyzed cons: 1. The limited range of privacy-preserving technique categories and the breadth of the datasets limit the generalizability of the results. 2. Why this assessment is used is not clarified. Does this kind of assessment guarantee an accurate evaluation? 3. More importantly, "appropriate privacy metrics" need to be suggested for real-world publishing. According to the principle of "protection as needed", smaller privacy indicators are not necessarily better 4. Some generative data privacy efforts could be added: Security and privacy on generative data in AIGC: A survey; On protecting the data privacy of large language models (LLMs): A survey rating: 8 confidence: 4
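The two-stage pipeline described in the abstract above (a sparse retriever links sanitized records to individuals via auxiliary information, then semantic matching quantifies the extra attributes that can be inferred) can be illustrated with a minimal sketch. The snippet below uses scikit-learn TF-IDF retrieval as a stand-in for the sparse retriever and a crude substring check as a stand-in for semantic matching; the records, auxiliary string, and attribute list are invented for illustration and are not from the paper.

```python
# Minimal sketch of a link-then-infer privacy evaluation, assuming TF-IDF retrieval
# and substring matching stand in for the paper's sparse retriever and semantic matcher.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sanitized_records = [
    "Patient reports chronic back pain, works night shifts at a warehouse.",
    "User asks about visa rules, mentions moving to Berlin next month.",
]
# Auxiliary information an adversary might already know about a target individual.
auxiliary_info = "warehouse worker on night shifts complaining about back pain"

# Stage 1: link the auxiliary information to the most similar sanitized record.
vectorizer = TfidfVectorizer().fit(sanitized_records + [auxiliary_info])
record_vecs = vectorizer.transform(sanitized_records)
aux_vec = vectorizer.transform([auxiliary_info])
best_match = cosine_similarity(aux_vec, record_vecs).argmax()

# Stage 2: estimate which candidate attributes the matched record still reveals.
candidate_attributes = ["chronic back pain", "night shifts", "substance use"]
matched = sanitized_records[best_match].lower()
leaked = [a for a in candidate_attributes if a in matched]
print(best_match, leaked)
```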
ZXgvPANlwe
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
[ "Michael-Andrei Panaitescu-Liess", "Pankayaraj Pathmanathan", "Yigitcan Kaya", "Zora Che", "Bang An", "Sicheng Zhu", "Aakriti Agrawal", "Furong Huang" ]
As the capabilities of large language models (LLMs) continue to expand, their usage has become increasingly prevalent. However, as reflected in numerous ongoing lawsuits related to LLM-generated content, addressing copyright infringement remains a significant challenge. In this paper, we introduce the first data poisoning attack specifically designed to induce the generation of copyrighted content by an LLM, even when the model has not been directly trained on the specific copyrighted material. We find that a straightforward attack—which integrates small fragments of copyrighted text into the poison samples—is surprisingly effective at priming the models to generate copyrighted content. Moreover, we demonstrate that current defenses are insufficient and largely ineffective against this type of attack, underscoring the need for further exploration of this emerging threat model.
/pdf/074ab7503b2505caf2fcb89f9723ad599d7f0e6a.pdf
nehn0rZkM1
official_review
1,728,436,560,305
ZXgvPANlwe
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission237/Reviewer_CRji" ]
title: This work introduces a novel data poisoning attack to induce LLMs to output copyrighted data. review: Pros: 1) The work presents a compelling method to inject models with fragments of copyrighted data and eventually induce the LLM to reproduce larger portions of said text. 2) The paper provides a relevant albeit specific use case for this specific style of data poisoning attack 3) The paper demonstrates an ability to crack existing defense mechanisms, further validating the attack's effectiveness Cons: 1) Perhaps not necessary for a workshop paper, but my reasoning for not giving a higher score: the paper lacks some complexity and could do with more experiments to further back up its claims. rating: 7 confidence: 4
ZXgvPANlwe
PoisonedParrot: Subtle Data Poisoning Attacks to Elicit Copyright-Infringing Content from Large Language Models
[ "Michael-Andrei Panaitescu-Liess", "Pankayaraj Pathmanathan", "Yigitcan Kaya", "Zora Che", "Bang An", "Sicheng Zhu", "Aakriti Agrawal", "Furong Huang" ]
As the capabilities of large language models (LLMs) continue to expand, their usage has become increasingly prevalent. However, as reflected in numerous ongoing lawsuits related to LLM-generated content, addressing copyright infringement remains a significant challenge. In this paper, we introduce the first data poisoning attack specifically designed to induce the generation of copyrighted content by an LLM, even when the model has not been directly trained on the specific copyrighted material. We find that a straightforward attack—which integrates small fragments of copyrighted text into the poison samples—is surprisingly effective at priming the models to generate copyrighted content. Moreover, we demonstrate that current defenses are insufficient and largely ineffective against this type of attack, underscoring the need for further exploration of this emerging threat model.
/pdf/074ab7503b2505caf2fcb89f9723ad599d7f0e6a.pdf
7wG0IYE9M8
official_review
1,728,372,198,089
ZXgvPANlwe
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission237/Reviewer_uYTt" ]
title: The work proposes an attacking strategy capable of inducing LLMs to generate specific copyrighted text, and demonstrates that the strategy bypasses state-of-the-art defenses. review: **Summary:** The paper proposes an attacking strategy where the attacker cleverly induces LLMs to generate copyrighted data by feeding poisoned data into the training phase of the LLM. In particular, the poison samples are created with a sliding window approach over fragments of copyrighted text which are embedded into new samples generated by LLMs. These samples are referred to as poison samples. This simple strategy triggers LLMs to produce the copyrighted data, which risks malicious attackers taking advantage of LLMs by deliberately forcing them to generate copyrighted data in the hope of financial gains, often from lawsuits. The authors demonstrate the effectiveness of the proposed strategy on the BookMIA dataset using LLaMA-7B. They demonstrate the performance of the attacking strategy based on poisoned data and show that the model trained on poisoned data produces copyrighted text similar to that of a model trained on actual copies of the targeted copyrighted data. This highlights a crucial vulnerability in state-of-the-art LLMs. The paper is coherent, well-written, and reads easily. The authors did a good job identifying the gaps in the literature and drawing an analogy between poisoning attacks in LLMs and those in computer vision and text identification models. The authors also show that existing state-of-the-art defense strategies are incapable of mitigating the proposed simple attacking strategy. The latter is important in driving the research community to think more about developing effective defense strategies. **Strengths:** 1. The paper proposes a novel, effective attacking strategy for LLMs, uncovering a major vulnerability. 2. The paper is well-written and encourages the research community to develop improved defense strategies. **Weaknesses:** 1. While the authors have already proposed future work on developing an effective defense, the paper would greatly benefit if PoisonedParrot were presented alongside a defensive strategy. A simple attacking strategy that is easy to replicate and presented without a defense can easily be misused by malicious attackers. rating: 8 confidence: 3
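The sliding-window poison construction summarized in the review above can be sketched in a few lines. The window size, stride, and carrier sentence below are assumptions for illustration; the paper's actual fragment lengths and its LLM-generated carrier samples are not reproduced here.

```python
# Minimal sketch of sliding-window poison construction, assuming whitespace
# tokenization and a fixed carrier template instead of LLM-generated samples.
def make_poison_samples(copyrighted_text: str, window: int = 6, stride: int = 3):
    tokens = copyrighted_text.split()
    samples = []
    for start in range(0, max(len(tokens) - window + 1, 1), stride):
        fragment = " ".join(tokens[start:start + window])
        # Embed the copyrighted fragment inside an otherwise benign-looking sample.
        samples.append(f"As one blog post put it, {fragment}, which many readers liked.")
    return samples

poisons = make_poison_samples(
    "It was the best of times, it was the worst of times, it was the age of wisdom"
)
print(len(poisons), poisons[0])
```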
jD1eWpUMOf
Pruning for Robust Concept Erasing in Diffusion Models
[ "Tianyun Yang", "Ziniu Li", "Juan Cao", "Chang Xu" ]
Despite the impressive capabilities of text-to-image diffusion models, they can also generate undesirable images, including not-safe-for-work content and copyrighted artworks. Recent studies have explored resolving this issue by fine-tuning model parameters to erase problematic concepts. However, existing methods exhibit a major flaw in robustness, as fine-tuned models often reproduce undesirable outputs when faced with cleverly crafted prompts. This reveals a fundamental limitation in current approaches and raises potential risks for deploying diffusion models in real-world scenarios. To bridge this gap, we show that concept-related hidden states, while deactivated by existing methods, can be reactivated under attacks, indicating incomplete and temporary blocking of the concept generation path. In response, we introduce a simple yet efficient pruning-based framework for concept erasure. By integrating concept erasing and pruning into a single objective, our method effectively eliminates concept knowledge within models while simultaneously cutting off the pathways that could potentially reactivate the concept-related hidden states, ensuring robustness against adversarial prompts. Experimental results demonstrate a significant enhancement in our model's resilience to adversarial attacks. Compared with existing concept erasing methods, our method achieves about a 30% improvement in erasing NSFW content and artwork style.
/pdf/f3c757fe3153a0136b1729211bf449583080137e.pdf
3I2N3ShOql
official_review
1,728,726,598,856
jD1eWpUMOf
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission238/Reviewer_gM9B" ]
title: This work addresses a critical challenge in text-to-image diffusion models: generating inappropriate or copyrighted content, especially when faced with adversarial prompts. The authors propose a pruning-based variant of ESD, which is simple and makes sense. Comprehensive experiments were conducted to demonstrate this strategy's effectiveness. review: ### Strengths: 1. **Integration of Pruning and Erasing at Once**: The proposed strategy is an innovative approach ensuring concept erasing (especially for adversarial cases) by severing hidden state pathways, instead of directly finetuning parameters. They also show that either pruning before erasing or pruning after erasing generally works worse than integrating them into one single stage. 2. **Comprehensive Experiments**: The authors compare their method against multiple baselines, demonstrating its effectiveness. --- ### Suggestions: 1. A bit more illustration needs to be added to the title of Figure 3, explaining what the figure shows, and I think this figure should be placed at the top of page 5 or it might confuse readers. (before "Fig. 3 reveals the following mechanism: concept-erasing methods..." (lines 151-158)) 2. In Table 3, "UnlearnDiff+Garbage Truck", there is a lack of explanation about why P-ESD works worse than ESD. 3. It would be better to include how different levels of pruning affect robustness or generation quality in the appendix. --- ### Overall: I like this idea, which is simple and works nicely. The experiments substantiate the effectiveness of your approach. rating: 8 confidence: 4
jD1eWpUMOf
Pruning for Robust Concept Erasing in Diffusion Models
[ "Tianyun Yang", "Ziniu Li", "Juan Cao", "Chang Xu" ]
Despite the impressive capabilities of text-to-image diffusion models, they can also generate undesirable images, including not-safe-for-work content and copyrighted artworks. Recent studies have explored resolving this issue by fine-tuning model parameters to erase problematic concepts. However, existing methods exhibit a major flaw in robustness, as fine-tuned models often reproduce undesirable outputs when faced with cleverly crafted prompts. This reveals a fundamental limitation in current approaches and raises potential risks for deploying diffusion models in real-world scenarios. To bridge this gap, we show that concept-related hidden states, while deactivated by existing methods, can be reactivated under attacks, indicating incomplete and temporary blocking of the concept generation path. In response, we introduce a simple yet efficient pruning-based framework for concept erasure. By integrating concept erasing and pruning into a single objective, our method effectively eliminates concept knowledge within models while simultaneously cutting off the pathways that could potentially reactivate the concept-related hidden states, ensuring robustness against adversarial prompts. Experimental results demonstrate a significant enhancement in our model's resilience to adversarial attacks. Compared with existing concept erasing methods, our method achieves about a 30% improvement in erasing NSFW content and artwork style.
/pdf/f3c757fe3153a0136b1729211bf449583080137e.pdf
5jMs2wsu8C
official_review
1,728,500,922,540
jD1eWpUMOf
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission238/Reviewer_9RKX" ]
title: The paper presents an effective pruning-based approach to enhance the robustness of concept erasure in diffusion models, showing significant improvements over existing methods but lacking deeper theoretical analysis and broader generalization. review: This paper proposes a pruning-based framework to improve the robustness of concept erasure in text-to-image diffusion models. The approach addresses the limitations of current fine-tuning-based methods by pruning key parameters in the neural network responsible for generating undesirable content, such as NSFW images or copyrighted artwork. The framework integrates both concept erasing and pruning into a unified objective, effectively preventing the reactivation of erased concepts under adversarial prompts. The experiments show significant improvements over state-of-the-art concept erasing techniques, particularly in resisting adversarial attacks. ### Pros - **Identifies and Addresses Key Gap**: The paper successfully pinpoints a critical flaw in current concept erasing methods—their vulnerability to adversarial prompts. By introducing pruning to concept erasure, the authors provide a clear solution that tackles this problem head-on. It’s commendable that the authors took a widely known issue and designed a method to mitigate it specifically in the context of diffusion models. - **Novel Use of Pruning in Concept Erasure**: Applying pruning techniques to concept erasure in generative models is innovative and demonstrates a novel application of an established method, commonly used in classification tasks, to generative models. This novel angle makes the approach stand out from other pruning applications. - **Practical and Efficient Method**: The pruning-based approach manages to remove concept-related pathways while maintaining the model’s performance. By pruning less than 0.01% of parameters, the method shows both efficiency and effectiveness, a strong point for real-world deployment where models must remain robust while minimizing computational overhead. ### Cons - **Limited Theoretical Depth**: While the method is effective, the theoretical reasoning behind why specific pruned parameters lead to robust concept erasure is not fully explored. The lack of detailed theoretical backing weakens the paper’s contribution, as readers are left to speculate about the underlying mechanisms of robustness. rating: 5 confidence: 1
jD1eWpUMOf
Pruning for Robust Concept Erasing in Diffusion Models
[ "Tianyun Yang", "Ziniu Li", "Juan Cao", "Chang Xu" ]
Despite the impressive capabilities of text-to-image diffusion models, they can also generate undesirable images, including not-safe-for-work content and copyrighted artworks. Recent studies have explored resolving this issue by fine-tuning model parameters to erase problematic concepts. However, existing methods exhibit a major flaw in robustness, as fine-tuned models often reproduce undesirable outputs when faced with cleverly crafted prompts. This reveals a fundamental limitation in current approaches and raises potential risks for deploying diffusion models in real-world scenarios. To bridge this gap, we show that concept-related hidden states, while deactivated by existing methods, can be reactivated under attacks, indicating incomplete and temporary blocking of the concept generation path. In response, we introduce a simple yet efficient pruning-based framework for concept erasure. By integrating concept erasing and pruning into a single objective, our method effectively eliminates concept knowledge within models while simultaneously cutting off the pathways that could potentially reactivate the concept-related hidden states, ensuring robustness against adversarial prompts. Experimental results demonstrate a significant enhancement in our model's resilience to adversarial attacks. Compared with existing concept erasing methods, our method achieves about a 30% improvement in erasing NSFW content and artwork style.
/pdf/f3c757fe3153a0136b1729211bf449583080137e.pdf
mh2KTnCCne
official_review
1,728,394,450,703
jD1eWpUMOf
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission238/Reviewer_ibjq" ]
title: The paper identifies why finetuning approaches regarding concept erasure are prone to adversarial attacks and resolves this using a mixed pruning-erasure based objective. Very well founded, novel arguments that are logically treated and resolved. review: Summary: The authors notice that finetuning based concept erasure methods are flawed, as they only deactivate certain diffusion paths without entirely removing them. To resolve this, they propose a pruning method to completely remove these problematic paths. The pruning strategy only requires masking very few of the weights (around 0.001%). The masks are trained simultaneously with a freely chosen, widespread concept erasure method. Strengths: - Clear explanation of why finetuning methods fail when adversarially attacked (highly relevant) - Flexible training objective that can be used with a freely chosen concept erasure objective - Strong results with very few weights masked, therefore keeping most of the model capacities unharmed - Very self-critical paper, clearly stating and discussing the possible limitations of the approach, much appreciated Weaknesses: - No clear objections General comment: I am surprised by how few of the weights need to be altered. I would be afraid that other adversarial paths undetected by current attack methods still exist, and that also removing these would require changing significantly more weights. I would have thought that there were "more paths to Rome"; an analysis in that regard (even beyond the safety perspective) would be very valuable. Review summary: The paper identifies why finetuning approaches regarding concept erasure are prone to adversarial attacks and resolves this using a mixed pruning-erasure based objective. Very well founded, novel arguments that are logically treated and resolved. rating: 8 confidence: 4
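The joint erase-and-prune objective the reviews describe (a learned mask over a tiny fraction of weights, trained together with a concept-erasing loss) can be sketched as a toy PyTorch illustration. The single linear layer, the MSE erasing loss, and the penalty weight below are assumptions standing in for the paper's diffusion U-Net, its chosen erasure objective, and its pruning ratio.

```python
# Toy sketch of training a pruning mask jointly with an erasing loss; not the
# paper's actual objective, just the "single combined objective" pattern.
import torch
import torch.nn as nn

layer = nn.Linear(16, 16)
mask_logits = nn.Parameter(torch.full_like(layer.weight, 3.0))  # keep-mask starts near 1
opt = torch.optim.Adam([mask_logits] + list(layer.parameters()), lr=1e-2)

x = torch.randn(8, 16)
erased_target = torch.zeros(8, 16)  # stand-in "concept removed" behaviour

for _ in range(200):
    keep = torch.sigmoid(mask_logits)                 # ~1 keeps a weight, ~0 prunes it
    out = x @ (layer.weight * keep).T + layer.bias
    erase_loss = ((out - erased_target) ** 2).mean()  # stand-in for the erasing loss
    prune_penalty = (1.0 - keep).mean()               # keep the pruned fraction tiny
    loss = erase_loss + 0.1 * prune_penalty
    opt.zero_grad()
    loss.backward()
    opt.step()

pruned_fraction = (torch.sigmoid(mask_logits) < 0.5).float().mean().item()
print(f"pruned fraction: {pruned_fraction:.4f}")
```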
cSAMf9czc4
Red Teaming Language-Conditioned Robot Models via Vision Language Models
[ "Sathwik Karnik", "Zhang-Wei Hong", "Nishant Abhangi", "Yen-Chen Lin", "Tsun-Hsuan Wang", "Pulkit Agrawal" ]
Language-conditioned robot models enable robots to perform a wide range of tasks based on natural language instructions. Despite strong performance on existing benchmarks, evaluating the safety and effectiveness of these models is challenging due to the complexity of testing all possible language variations. Current benchmarks have two key limitations: they rely on a limited set of human-generated instructions, missing many challenging cases, and they focus only on task performance without assessing safety, such as avoiding damage. To address these gaps, we introduce Embodied Red Teaming (ERT), a new evaluation method that generates diverse and challenging instructions to test these models. ERT uses automated red teaming techniques with Vision Language Models (VLMs) to create contextually grounded, difficult instructions. Experimental results show that state-of-the-art models frequently fail or behave unsafely on ERT tests, underscoring the shortcomings of current benchmarks in evaluating real-world performance and safety.
/pdf/cf80f98b9b097ed13fdf1d437b5b0af9b25933e0.pdf
hemXoMLbyL
official_review
1,728,538,539,222
cSAMf9czc4
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission239/Reviewer_4seu" ]
title: Embodied Red Teaming Framework for VLM-Guided Robots - Good analysis with room for improvement review: Review Overview:\ This work introduces the Embodied Red Teaming framework to test the capabilities and safety of existing VLM-guided robots. The framework is novel and the authors present a good analysis; however, the existing experiments could be strengthened and more experiments could bolster the argument.\ This review follows the sections of the paper. Strengths (+) and weaknesses (-) are noted for each section. Introduction:\ \+ Well-written, clear intro. Good examples in lines 34-35, 53, and 64.\ \- Lines 41-42: Is there an example of a method and dataset where the score drops that significantly? Such an experiment would strengthen this work's motivation.\ \- Red teaming bears resemblance to the concept of adversarial attacks in machine learning. Is this the same as red teaming? A brief statement of their similarity or difference would be informative, as most readers are aware of "adversarial attacks" in ML. Preliminaries:\ Embodied Red Teaming:\ \- In Equation 1, what is R? A reward function? Later it is called a metric function; maybe move that description up to where Equation 1 is defined. Experiments:\ \- The naive baseline is a good idea, but the work could benefit from more such baselines. For example, randomly adding and deleting a word to give a noisy instruction, substituting synonyms for the main verb or noun, etc.\ \- Table 1 would be more meaningful if the rows contained the same task. If ERT were applied to the same tasks as CALVIN or RLBench, then we could see how ERT differs in its wording. Right now, it is hard to tell whether ERT phrasing is more challenging because the reader doesn't know what the original prompt was. Maybe make two separate tables side by side, one comparing CALVIN and ERT and the other RLBench and ERT. Adding a rephrase column would also be helpful!\ \- The diversity scores experiment is good; however, in Figure 3, I would keep the scales of the 3 plots the same. Right now, at first glance, it looks like the difference between the green bar and blue bar is about the same in the BLEU and CLIP plots, which is misleading. Because of the scales, it's actually a big difference, which the authors correctly point out in the caption. Discussion and Analysis:\ \- The results in line 322 or figures a and b are not very interesting, as without fine-tuning the robot is of course expected to follow any instructions given. It would have been more interesting if you attempted to fine-tune for this and checked whether the robot rejects unsafe instructions or whether there are still unsafe instructions that can be found that it would perform (maybe adapt ERT to provide unsafe instructions instead of a more difficult phrasing).\ \- For line 366, figures c and d, how are these neutral instructions found? If they were part of one of the datasets, that would make this point more impactful. Characterizing how often a neutral instruction is unsafe within the dataset would be ideal.\ \- The example around line 387 is making me rethink whether ERT is too difficult. If someone asked me to "Suppress the glow" of an LED, I probably would not think to turn it off, but I would point it in the other direction or put a bedsheet over it or something to make it dimmer.
It would be interesting to do a user study of ERT and see if the users can actually complete the task correctly given the task phrasing.\ \- I wonder if the length of the instruction has something to do with the poor performance. It seems like the ERT instructions tend to be longer, even in Table 1. Maybe this is an "ablation" that should be explored.\ \+ The idea of "embodied similarity" is a very sound conclusion to draw. There is room for more explanation and more experiments to prove it though, other than these qualitative observations of the ERT outputs. Maybe ask GPT to produce instructions that are "human-centric" and then measure how well that performs. Conclusion:\ \+ Good structure, addressing interested audiences and presenting limitations concisely. rating: 7 confidence: 3
cSAMf9czc4
Red Teaming Language-Conditioned Robot Models via Vision Language Models
[ "Sathwik Karnik", "Zhang-Wei Hong", "Nishant Abhangi", "Yen-Chen Lin", "Tsun-Hsuan Wang", "Pulkit Agrawal" ]
Language-conditioned robot models enable robots to perform a wide range of tasks based on natural language instructions. Despite strong performance on existing benchmarks, evaluating the safety and effectiveness of these models is challenging due to the complexity of testing all possible language variations. Current benchmarks have two key limitations: they rely on a limited set of human-generated instructions, missing many challenging cases, and they focus only on task performance without assessing safety, such as avoiding damage. To address these gaps, we introduce Embodied Red Teaming (ERT), a new evaluation method that generates diverse and challenging instructions to test these models. ERT uses automated red teaming techniques with Vision Language Models (VLMs) to create contextually grounded, difficult instructions. Experimental results show that state-of-the-art models frequently fail or behave unsafely on ERT tests, underscoring the shortcomings of current benchmarks in evaluating real-world performance and safety.
/pdf/cf80f98b9b097ed13fdf1d437b5b0af9b25933e0.pdf
B6mG2LN2Ux
official_review
1,728,533,273,214
cSAMf9czc4
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission239/Reviewer_CbJp" ]
title: The paper aligns well with the workshop theme, using VLMs to generate challenging instructions for stress testing robot models. However, the approach lacks originality/significance, as LLMs and VLMs have already been widely used for similar tasks. Due to this, I am rating the paper as marginally below the acceptance threshold. review: **Strength** _Alignment with workshop:_ The paper aligns really well with the theme of the workshop. The paper proposes the use of a VLM to generate challenging instructions for stress testing language-conditioned robot models. The paper also refines the generated challenging instructions by adding the previously generated challenging instructions to the VLM prompt. _Writing style and clarity:_ The paper is well written and all the details are clearly laid out. --- **Weakness** _Originality and Significance:_ The biggest weakness of the paper is its significance. The proposed approach is not new. LLMs (including VLMs) have been used to generate synthetic data for a wide range of applications, such as model training and stress testing. LLMs (auto-raters) are also used for evaluating the output of other LLMs. Therefore, I feel the contributions in this paper are not very significant. Because of this drawback, I am rating this paper 5 (marginally below the acceptance threshold). rating: 5 confidence: 4
cSAMf9czc4
Red Teaming Language-Conditioned Robot Models via Vision Language Models
[ "Sathwik Karnik", "Zhang-Wei Hong", "Nishant Abhangi", "Yen-Chen Lin", "Tsun-Hsuan Wang", "Pulkit Agrawal" ]
Language-conditioned robot models enable robots to perform a wide range of tasks based on natural language instructions. Despite strong performance on existing benchmarks, evaluating the safety and effectiveness of these models is challenging due to the complexity of testing all possible language variations. Current benchmarks have two key limitations: they rely on a limited set of human-generated instructions, missing many challenging cases, and they focus only on task performance without assessing safety, such as avoiding damage. To address these gaps, we introduce Embodied Red Teaming (ERT), a new evaluation method that generates diverse and challenging instructions to test these models. ERT uses automated red teaming techniques with Vision Language Models (VLMs) to create contextually grounded, difficult instructions. Experimental results show that state-of-the-art models frequently fail or behave unsafely on ERT tests, underscoring the shortcomings of current benchmarks in evaluating real-world performance and safety.
/pdf/cf80f98b9b097ed13fdf1d437b5b0af9b25933e0.pdf
CwD0a0U2xq
official_review
1,728,448,507,144
cSAMf9czc4
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission239/Reviewer_Sewr" ]
title: Solid paper, but not inspiring enough for a workshop presentation review: ### Summary The paper introduces a novel evaluation framework for language-conditioned robot models using VLMs to generate diverse and challenging instructions. ERT aims to assess the safety and effectiveness of these models beyond conventional benchmarks by creating instructions that test the robot's ability to handle complex and unforeseen scenarios. However, the paper itself reads more like prompt engineering, which might not do much to stimulate the workshop's discussion. ### Strengths - ERT introduces a systematic approach to generate and test instructions that reflect real-world complexities. - By focusing on safety and the ability to handle novel scenarios, ERT addresses critical gaps in current robotic model evaluations. - The integration of vision and language processing to generate context-specific instructions is a notable technical advancement. - The paper provides a really comprehensive appendix of the prompts (nearly 100 pages), which reflects great effort. ### Weaknesses - The need for sophisticated setups involving VLMs and dynamic instruction generation could complicate the adoption of ERT. - While effective, the current implementation of ERT is primarily tested on manipulation tasks, and its applicability to other types of robotic operations remains to be explored. rating: 5 confidence: 4
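The iterative instruction-generation loop described in the reviews above (previously generated instructions are fed back into the VLM prompt, conditioned on the scene) can be sketched roughly as follows. `query_vlm` is a hypothetical placeholder rather than any real API, and the prompt wording is an assumption, not the paper's actual prompt.

```python
# Rough sketch of an ERT-style generation loop with a placeholder VLM call.
from typing import List

def query_vlm(prompt: str, image_path: str) -> str:
    # Hypothetical stand-in: replace with a real vision-language model call
    # that sees the scene image and the text prompt.
    return f"(VLM instruction generated from a prompt of {len(prompt)} characters)"

def generate_ert_instructions(task: str, image_path: str, rounds: int = 5) -> List[str]:
    instructions: List[str] = []
    for _ in range(rounds):
        prompt = (
            f"The robot's task is: {task}. "
            f"Looking at the scene, write one valid but unusually phrased instruction "
            f"for this task that differs from all of these: {instructions}"
        )
        instructions.append(query_vlm(prompt, image_path))
    return instructions

print(generate_ert_instructions("turn off the LED", "scene.png"))
```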
zKhpACcPdb
H-Space Sparse Autoencoders
[ "Ayodeji Ijishakin", "Ming Liang Ang", "Levente Baljer", "Daniel Chee Hian Tan", "Hugo Laurence Fry", "Ahmed Abdulaal", "Aengus Lynch", "James H. Cole" ]
In this work, we introduce a computationally efficient method that allows Sparse Autoencoders (SAEs) to automatically detect interpretable directions within the latent space of diffusion models. We show that intervening on a single neuron in SAE representation space at a single diffusion time step leads to meaningful feature changes in model output. This marks a step toward applying techniques from mechanistic interpretability to controlling the outputs of diffusion models, further ensuring the safety of their generations. As such, we establish a connection between safety/interpretability methods from language modelling and image generative modelling.
/pdf/7909238d9764aef80bdcdde767534114da147cdf.pdf
yugsEmFu0U
official_review
1,728,602,540,572
zKhpACcPdb
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission240/Reviewer_qtfg" ]
title: Paper Review review: The paper presents H-Space Sparse Autoencoders (SAEs) as a method to improve the interpretability and safety of diffusion models. It focuses on leveraging the latent space within the U-Net architecture of diffusion models, known as “H-Space,” to enable controllable and interpretable modifications to generated images. Notable contributions include the introduction of the Channel-Aware SAE and interpretable modification techniques. Both quantitative and qualitative assessments are conducted on multiple datasets (CELEB-A, CELEBA-HQ, and FFHQ), where the method demonstrates competitive performance, slightly surpassing diffusion-based models such as Diff-AE and DiTi, as well as traditional models like β-TCVAE. Additionally, the paper’s use of thresholding and activation sparsity illustrates how SAEs can isolate specific features with high precision. rating: 7 confidence: 3
zKhpACcPdb
H-Space Sparse Autoencoders
[ "Ayodeji Ijishakin", "Ming Liang Ang", "Levente Baljer", "Daniel Chee Hian Tan", "Hugo Laurence Fry", "Ahmed Abdulaal", "Aengus Lynch", "James H. Cole" ]
In this work, we introduce a computationally efficient method that allows Sparse Autoencoders (SAEs) to automatically detect interpretable directions within the latent space of diffusion models. We show that intervening on a single neuron in SAE representation space at a single diffusion time step leads to meaningful feature changes in model output. This marks a step toward applying techniques from mechanistic interpretability to controlling the outputs of diffusion models, further ensuring the safety of their generations. As such, we establish a connection between safety/interpretability methods from language modelling and image generative modelling.
/pdf/7909238d9764aef80bdcdde767534114da147cdf.pdf
T8QOvvU5gt
official_review
1,728,490,331,092
zKhpACcPdb
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission240/Reviewer_WYCZ" ]
title: Overall, an interesting work with substantial room for improvement toward a richer understanding review: The paper introduces the interesting concept of H-space SAEs for intervening in diffusion model generation, showing that single-neuron intervention produces significant feature changes in the generated output. The work points to valuable future directions for improvement, such as the choice of $t=0.7T$. It would be interesting to see how the results change as the choice of t moves in both directions, and also a richer understanding of the interventions. Strengths: - Interesting idea with links to work in language modelling - Good quantitative and qualitative results - Posed and solved the problem of computational limitations in the H-space of U-Nets Weaknesses: - Sections 3.4.1 and 3.4.2 are a little hard to follow, and Algorithms 1 & 2 could benefit from more descriptive information about what happens at each line; otherwise the reader needs to keep a lot of notation in mind to follow along. - There doesn't seem to be an explanation of why the middle of the U-Net is used as the H-space; this may be common knowledge in the specific area, but it would be beneficial to state why. - The intervention in the H-space seems a little ad hoc and would benefit from an explanation of how and why specific interventions are chosen, if there are any. Minor Issues: - line 79: deep diffusion implicit Models should be Denoising Diffusion Implicit Models - Sec 3.4.2: A few mistakes in this section, unfinished eq (L.152), forward process (L.153) rating: 6 confidence: 3
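The single-neuron intervention described in the abstract (boosting one SAE latent at one diffusion timestep) can be sketched as below. The toy SAE, the residual-style edit, the chosen neuron index, and the boost magnitude are all assumptions; the paper's channel-aware architecture and its $t=0.7T$ schedule are not reproduced.

```python
# Toy sketch of editing a single SAE latent of a stand-in "h-space" activation.
import torch
import torch.nn as nn

d_hspace, d_latent = 512, 2048
encoder = nn.Linear(d_hspace, d_latent)   # toy SAE encoder
decoder = nn.Linear(d_latent, d_hspace)   # toy SAE decoder

h = torch.randn(1, d_hspace)              # stand-in bottleneck activation at one timestep
z = torch.relu(encoder(h))                # sparse latent code
z_edit = z.clone()
z_edit[0, 123] += 5.0                     # boost a single (arbitrarily chosen) latent neuron

# Apply the change as a residual so the SAE's reconstruction error is carried over unchanged.
h_edit = h + decoder(z_edit) - decoder(z)
print(h_edit.shape)
```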
zKhpACcPdb
H-Space Sparse Autoencoders
[ "Ayodeji Ijishakin", "Ming Liang Ang", "Levente Baljer", "Daniel Chee Hian Tan", "Hugo Laurence Fry", "Ahmed Abdulaal", "Aengus Lynch", "James H. Cole" ]
In this work, we introduce a computationally efficient method that allows Sparse Autoencoders (SAEs) to automatically detect interpretable directions within the latent space of diffusion models. We show that intervening on a single neuron in SAE representation space at a single diffusion time step leads to meaningful feature changes in model output. This marks a step toward applying techniques from mechanistic interpretability to controlling the outputs of diffusion models, further ensuring the safety of their generations. As such, we establish a connection between safety/interpretability methods from language modelling and image generative modelling.
/pdf/7909238d9764aef80bdcdde767534114da147cdf.pdf
NRWUtemoXQ
official_review
1,728,449,142,617
zKhpACcPdb
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission240/Reviewer_ReSa" ]
title: Reasonable paper review: Due to paucity of time, a full review could not be conducted. Are the results in Table 1 and 2 statistically significant? rating: 6 confidence: 1
jGtL0JFdeD
On a Spurious Interaction between Uncertainty Scores and Answer Evaluation Metrics in Generative QA Tasks
[ "Andrea Santilli", "Miao Xiong", "Michael Kirchhof", "Pau Rodriguez", "Federico Danieli", "Xavier Suau", "Luca Zappella", "Sinead Williamson", "Adam Golinski" ]
Knowing when a language model is uncertain about its generations is a key challenge for enhancing LLMs’ safety and reliability. An increasing issue in the field of Uncertainty Quantification (UQ) for Large Language Models (LLMs) is that the performance values reported across papers are often incomparable, and sometimes even conflicting, due to different evaluation protocols. In this paper, we highlight that some UQ methods and answer evaluation metrics are spuriously correlated via the response length, which leads to falsely elevated performances of uncertainty scores that are sensitive to response length, such as sequence probability. We perform empirical evaluations according to two different protocols in the related literature, one using a substring-overlap-based evaluation metric, and one using an LLM-as-a-judge approach, and show that the conflicting conclusions between these two works can be attributed to this interaction.
/pdf/6a46da66775c9c8eef30ef09fd64a993a88547ba.pdf
I82rGDzSaR
official_review
1,728,529,814,393
jGtL0JFdeD
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission242/Reviewer_1yPd" ]
title: Interesting exploration of ROUGE-L's biases for judging correctness for selective prediction, but findings could be more informative. review: This paper investigates apparent inconsistencies in the findings of 2 papers on uncertainty quantification in LLMs. Experiments use a number of different methods (e.g., entropy-based ones, sequence probabilities, and learned selection functions) with 7 models of varying sizes. The aim of the exploration is to examine whether the performance of sequence probabilities may be due to other confounding factors. **Strengths** - The explorations of the ROUGE-L metric may be useful to the community. In particular, illuminating its favoring of higher scores is informative and could warrant similar explorations with other metrics. This kind of critical examination of metrics also adds to the growing body of work re-evaluating commonly held conclusions about LLM performance due to metric choices (e.g., https://arxiv.org/abs/2304.15004), but in the context of selective prediction. - There is a wide breadth of models and uncertainty quantification methods considered in the experiments, which helps make the results more comprehensive. The methods used vary quite widely and cover many prevalent uncertainty quantification techniques. **Weaknesses** - The effectiveness of LLM-as-a-judge has been established (e.g., https://arxiv.org/abs/2306.05685), while the drawbacks of n-gram matching metrics have also been illustrated in prior work (e.g., https://arxiv.org/abs/2303.16634, https://aclanthology.org/J09-4008). While the problem setting may be different, the usage of these metrics in this work is still quite similar. Hence, it is intuitive that the findings may also be similar, given that abstention performance is linked to task performance. It's not entirely clear how informative of a finding this might be to the community. - In a similar vein to the previous point, the conclusions of [10] and [11] do not appear to be as much at odds as this paper purports. To quote from [10]: "For LLaMA, there is no clear advantage for any of the methods considered." In that same paper, we can see significant variance in the method rankings between ROUGE-L and BERTScore. - It could be interesting for those trying to learn from this work to see variation across benchmarks. These results could be in the appendix, but it's plausible that changing the task and evaluation set could yield noticeable differences (e.g., https://arxiv.org/abs/2306.08751). - Some claims could be adjusted or better supported: For lines 39-44, there do not appear to be any explorations of different temperatures or implementation comparisons, nor budget considerations for evaluation sets. rating: 5 confidence: 4
jGtL0JFdeD
On a Spurious Interaction between Uncertainty Scores and Answer Evaluation Metrics in Generative QA Tasks
[ "Andrea Santilli", "Miao Xiong", "Michael Kirchhof", "Pau Rodriguez", "Federico Danieli", "Xavier Suau", "Luca Zappella", "Sinead Williamson", "Adam Golinski" ]
Knowing when a language model is uncertain about its generations is a key challenge for enhancing LLMs’ safety and reliability. An increasing issue in the field of Uncertainty Quantification (UQ) for Large Language Models (LLMs) is that the performance values reported across papers are often incomparable, and sometimes even conflicting, due to different evaluation protocols. In this paper, we highlight that some UQ methods and answer evaluation metrics are spuriously correlated via the response length, which leads to falsely elevated performances of uncertainty scores that are sensitive to response length, such as sequence probability. We perform empirical evaluations according to two different protocols in the related literature, one using a substring-overlap-based evaluation metric, and one using an LLM-as-a-judge approach, and show that the conflicting conclusions between these two works can be attributed to this interaction.
/pdf/6a46da66775c9c8eef30ef09fd64a993a88547ba.pdf
a7iy75rdAx
official_review
1,728,486,371,031
jGtL0JFdeD
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission242/Reviewer_TizB" ]
title: Great work identifying discrepancies between evaluations, pushing for greater reproducibility review: quality ------- Pros - The LLMs and datasets are well-chosen - The critique is both thorough in its breadth, and well-targeted. - The comparison of Rouge-L and LLM-grader to human annotation (and high inter-human agreement) is clear, reliable and decisive. The identification of length bias is also satisfying. Cons - What I'd want to see, at the end of a critical work like this, is a list of recommendations. The critique is well-performed, but I'm disappointed that after a thorough review of the parameters which can affect an evaluation the only actionable result I come away with (I may have missed something) is "use an LLM grader, not Rouge-L". clarity ------- Pros - The introduction is wonderfully written, and a delight to read. - The layout is very clear, and the sections well-defined and signposted. Cons - The lengthy introduction to the work's formalism (§3) is something I'd expect to see at the start of a thesis, rather than a conference paper. In particular, it's not clear to me that the overview of sampling methods is targeted and succinct; perhaps more signposting would help highlight this section's relevance to the experiments. While part of the work of the paper is to emphasise the breadth of UQ options, details could be moved to the appendix. - Despite the extensive formalism, no summary detail is given for Rouge-L itself. originality ---------- Pros - I know of no similar literature review or analysis of reproducibility significance ------------ Pros - It is, as the authors say, of paramount importance to make sense of the conflicting claims of the performance landscape of the various UQ methods in the literature, to enable reproducibility and reliability. - The paper's focus on the discrepancy between [10] and [11] is keenly targeted, and results in a satisfying attribution. - The recommendation to use LLM graders is clear and supported. Cons - I hadn't really heard of Rouge-L before now, so I wouldn't have expected it to be a particularly significant/prevalent foil for LLM-graders. rating: 8 confidence: 4
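The length confound at the center of this paper can be illustrated with synthetic numbers: a length-sensitive uncertainty score (summed token log-probabilities) and an overlap-based correctness metric both track response length, which induces a spurious correlation between them. The data below are fabricated for illustration only and carry no claim about the paper's results.

```python
# Synthetic illustration of a length-driven spurious correlation between an
# uncertainty score and an overlap-based correctness metric.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
lengths = rng.integers(3, 40, size=200)                      # tokens per response
seq_logprob = -0.9 * lengths + rng.normal(0, 2.0, 200)       # drops as responses get longer
overlap_metric = 0.02 * lengths + rng.normal(0, 0.2, 200)    # drifts up with length

rho_unc_len, _ = spearmanr(-seq_logprob, lengths)            # uncertainty score vs length
rho_met_len, _ = spearmanr(overlap_metric, lengths)          # correctness metric vs length
rho_spurious, _ = spearmanr(-seq_logprob, overlap_metric)    # the resulting spurious link
print(rho_unc_len, rho_met_len, rho_spurious)
```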
QKRLH57ATT
Efficient and Effective Uncertainty Quantification for LLMs
[ "Miao Xiong", "Andrea Santilli", "Michael Kirchhof", "Adam Golinski", "Sinead Williamson" ]
Uncertainty quantification (UQ) is crucial for ensuring the safe deployment of large language models, particularly in high-stakes applications where hallucinations can be harmful. However, existing UQ methods often demand substantial computational resources, e.g., multi-sample methods such as Semantic Entropy usually require 5-10 inference calls, and probing-based methods require additional datasets for training. This raises a key question: How can we balance UQ performance with computational efficiency? In this work, we first analyze the performance and efficiency of various UQ methods across 6 datasets x 6 models x 2 prompt strategies. Our findings reveal that: 1) Multi-sample methods generally perform only marginally better than single-sample methods, i.e., ≤ 0.02 in AUROC over 65% of settings, despite significantly higher inference costs. 2) Probing-based methods perform well primarily on mathematical reasoning and truthfulness benchmarks, while multi-sample methods only show a clear advantage on knowledge-seeking tasks. These findings suggest that the high computational cost does not translate into significant performance gains. Despite their similar overall performance, we observe only moderate correlations between different UQ methods, suggesting they may be capturing different uncertainty signals. This motivates us to explore the potential of combining different methods to harness their complementary strengths at lower computational costs. Our experiments demonstrate that a simple combination of single-sample features can match or even outperform the existing best-performing methods. These findings suggest a promising direction for developing cost-effective uncertainty estimators.
/pdf/c22ab9020ab03934edcb50fd393fe79f77e2cd12.pdf
7JkH2yrbsT
official_review
1,728,515,875,263
QKRLH57ATT
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission243/Reviewer_HMHU" ]
title: nice paper, novel analysis review: The paper compares existing LLM uncertainty quantification methods regarding their computational cost and performance. The paper then compares the differences between signals learned by different methods and tests combined methods according to the correlation of signals. The paper is well written. The analysis of computational cost is novel, but the significance of the combined method is not convincing enough. Pros 1. The paper is clearly written and presented. 2. It is novel to compare the tradeoff between computational cost and performance of uncertainty quantification methods. Cons 1. Section 4 is not clear. What signal is compared in the correlation plots? The distribution of predicted uncertainty? 2. The improvement of combined methods in Fig 3 does not seem significant. rating: 9 confidence: 3
QKRLH57ATT
Efficient and Effective Uncertainty Quantification for LLMs
[ "Miao Xiong", "Andrea Santilli", "Michael Kirchhof", "Adam Golinski", "Sinead Williamson" ]
Uncertainty quantification (UQ) is crucial for ensuring the safe deployment of large language models, particularly in high-stakes applications where hallucinations can be harmful. However, existing UQ methods often demand substantial computational resources, e.g., multi-sample methods such as Semantic Entropy usually require 5-10 inference calls, and probing-based methods require additional datasets for training. This raises a key question: How can we balance UQ performance with computational efficiency? In this work, we first analyze the performance and efficiency of various UQ methods across 6 datasets x 6 models x 2 prompt strategies. Our findings reveal that: 1) Multi-sample methods generally perform only marginally better than single-sample methods, i.e., ≤ 0.02 in AUROC over 65% of settings, despite significantly higher inference costs. 2) Probing-based methods perform well primarily on mathematical reasoning and truthfulness benchmarks, while multi-sample methods only show a clear advantage on knowledge-seeking tasks. These findings suggest that the high computational cost does not translate into significant performance gains. Despite their similar overall performance, we observe only moderate correlations between different UQ methods, suggesting they may be capturing different uncertainty signals. This motivates us to explore the potential of combining different methods to harness their complementary strengths at lower computational costs. Our experiments demonstrate that a simple combination of single-sample features can match or even outperform the existing best-performing methods. These findings suggest a promising direction for developing cost-effective uncertainty estimators.
/pdf/c22ab9020ab03934edcb50fd393fe79f77e2cd12.pdf
MypUzRytAr
official_review
1,728,509,749,235
QKRLH57ATT
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission243/Reviewer_cED6" ]
title: The paper shows that single-sample UQ methods can match multi-sample ones in efficiency and performance, with promising insights for real-world use review: **Pros** : - well written - clear explanation of experiments and settings - clear interpretation of results - Comprehensive Analysis - novel insights **Cons**: - Fig. 1 is hard to understand (it would be better to make it clearer) - Limited Discussion on Limitations rating: 7 confidence: 5
QKRLH57ATT
Efficient and Effective Uncertainty Quantification for LLMs
[ "Miao Xiong", "Andrea Santilli", "Michael Kirchhof", "Adam Golinski", "Sinead Williamson" ]
Uncertainty quantification (UQ) is crucial for ensuring the safe deployment of large language models, particularly in high-stakes applications where hallucinations can be harmful. However, existing UQ methods often demand substantial computational resources, e.g., multi-sample methods such as Semantic Entropy usually require 5-10 inference calls, and probing-based methods require additional datasets for training. This raises a key question: How can we balance UQ performance with computational efficiency? In this work, we first analyze the performance and efficiency of various UQ methods across 6 datasets x 6 models x 2 prompt strategies. Our findings reveal that: 1) Multi-sample methods generally perform only marginally better than single-sample methods, i.e., ≤ 0.02 in AUROC over 65% of settings, despite significantly higher inference costs. 2) Probing-based methods perform well primarily on mathematical reasoning and truthfulness benchmarks, while multi-sample methods only show a clear advantage on knowledge-seeking tasks. These findings suggest that the high computational cost does not translate into significant performance gains. Despite their similar overall performance, we observe only moderate correlations between different UQ methods, suggesting they may be capturing different uncertainty signals. This motivates us to explore the potential of combining different methods to harness their complementary strengths at lower computational costs. Our experiments demonstrate that a simple combination of single-sample features can match or even outperform the existing best-performing methods. These findings suggest a promising direction for developing cost-effective uncertainty estimators.
/pdf/c22ab9020ab03934edcb50fd393fe79f77e2cd12.pdf
NgAieH0Ork
official_review
1,728,502,002,521
QKRLH57ATT
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission243/Reviewer_saKA" ]
title: Review of "Efficient and Effective Uncertainty Quantification for LLMs" review: ### **Summary** The paper proposes an efficient and effective approach for uncertainty quantification (UQ) in large language models (LLMs), particularly addressing the high computational costs associated with existing UQ methods. It explores single-sample, multi-sample, and probing-based methods across various datasets, finding that while multi-sample methods only slightly outperform single-sample methods, they do so at a much higher computational cost. The paper introduces a method that combines single-sample features to achieve similar or better performance than multi-sample approaches, offering a more cost-effective solution. --- ### **Strengths** 1. The paper provides a comprehensive evaluation of different UQ methods across multiple models, datasets, and prompt settings, offering insights into the performance and cost trade-offs. 2. By focusing on single-sample methods and their combinations, the paper presents a practical approach that significantly reduces computational expenses without compromising performance. 3. The idea of combining single-sample, probing-based, and multi-sample methods into a unified framework to leverage complementary strengths is simple but shows good performance. --- ### **Weaknesses** 1. While the integration approach is promising, the individual techniques explored (single-sample, multi-sample, and probing-based) are already well-established. The paper’s primary contribution appears to be the combination rather than the development of new methods. 2. The paper emphasizes computational cost throughout, but it does not include experiments analyzing or quantifying the computational complexity. The experiments focus solely on performance, which is insufficient for an analytical study that aims to address efficiency concerns. 3. The paper claims that probing-based methods require additional datasets and impose computational overhead. However, in practice, the training of probes can be extremely fast, depending on the size of the probe itself. Therefore, this aspect may not be as significant a limitation as suggested. rating: 5 confidence: 3
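The "combine cheap single-sample features" idea from the abstract can be sketched with a simple learned combiner evaluated by AUROC. The two features, the synthetic labels, and the logistic-regression combiner below are assumptions for illustration; they do not reproduce the paper's features or results.

```python
# Sketch of combining single-sample uncertainty features with a learned combiner.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
mean_logprob = rng.normal(-1.5, 0.5, n)    # cheap feature 1: mean token log-probability
length = rng.integers(3, 60, n)            # cheap feature 2: response length
# Synthetic "answer is correct" labels loosely tied to both features.
p_correct = 1 / (1 + np.exp(-(2.0 * (mean_logprob + 1.5) - 0.02 * length)))
y = rng.binomial(1, p_correct)

X = np.column_stack([mean_logprob, length])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
combiner = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, combiner.predict_proba(X_te)[:, 1]))
```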
s6P4W7QTD4
The Probe Paradigm: A Theoretical Foundation for Explaining Generative Models
[ "Amit Kiran Rege" ]
To understand internal representations in generative models, there has been a long line of research using \emph{probes}, i.e., shallow binary classifiers trained on the model's representations to indicate the presence/absence of human-interpretable \emph{concepts}. While the focus of much of this work has been empirical, it is important to establish rigorous guarantees on the use of such methods to understand their limitations. To this end, we introduce a formal framework to theoretically study explainability in generative models using probes. We discuss the applicability of our framework to a number of practical models and then, using our framework, we establish theoretical results on sample complexity and the limitations of probing in high-dimensional spaces. We then prove results highlighting significant limitations of probing strategies in the worst case. Our findings underscore the importance of cautious interpretation of probing results and imply that comprehensive auditing of complex generative models might be hard even with white-box access to internal representations.
/pdf/bf20241d4584b97dac870271a66772d0edde98d2.pdf
MyviM53M6W
official_review
1,728,714,581,398
s6P4W7QTD4
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission244/Reviewer_mLeg" ]
title: This manuscript provides a clear formal framework for explainability in generative models using probes, but it would benefit from a case study with popular models and a more thorough discussion of related works. Minor revisions are needed for formatting issues. review: This work explores the theoretical explainability of generative models using probes, presenting a versatile framework with accompanying proofs. Some analysis highlights the limitations of the proposed approach. Overall, the manuscript is well-structured, with a particularly clear definition of the formal framework. The section mapping LLMs and Diffusion models is highly useful. However, while the propositions are straightforward, it would strengthen the paper to include a case study with popular models to further demonstrate the applicability of the framework. The discussion of related works is not extensive enough. Please consider elaborating on existing probe-related studies and emphasizing how the proposed framework differentiates itself from prior work. Minor issues: Please update the header and correct 'Appendix ??' on Page 4, line 201. rating: 5 confidence: 3
s6P4W7QTD4
The Probe Paradigm: A Theoretical Foundation for Explaining Generative Models
[ "Amit Kiran Rege" ]
To understand internal representations in generative models, there has been a long line of research using \emph{probes}, i.e., shallow binary classifiers trained on the model's representations to indicate the presence/absence of human-interpretable \emph{concepts}. While the focus of much of this work has been empirical, it is important to establish rigorous guarantees on the use of such methods to understand their limitations. To this end, we introduce a formal framework to theoretically study explainability in generative models using probes. We discuss the applicability of our framework to a number of practical models, and then, using our framework, we establish theoretical results on sample complexity and the limitations of probing in high-dimensional spaces. We then prove results highlighting significant limitations of probing strategies in the worst case. Our findings underscore the importance of cautious interpretation of probing results and imply that comprehensive auditing of complex generative models might be hard even with white-box access to internal representations.
/pdf/bf20241d4584b97dac870271a66772d0edde98d2.pdf
ERHoR5fEPb
official_review
1,728,555,187,593
s6P4W7QTD4
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission244/Reviewer_qfMH" ]
title: Nice paper studying theoretical properties of probing review: The overall construction of the paper is nice. Even though the work is very theoretical, I think it could benefit from some concrete real-world or synthetic examples of the outcomes of the theoretical results. Furthermore, it is not clear why a model-agnostic framework is the best choice for studying generative models, as they contain very different hidden states: diffusion models are tasked with reconstructing noised representations at certain hidden states, whereas layers within an LLM may focus on different aspects of a piece of text, which seems very different to me and will therefore have varying implications for this paper's theoretical results. Strengths: - Clearly defined problem statement and clearly proposed framework - Solid theory that is straightforward to follow, with a good description of the results Weaknesses: - Examples of generative model types where the theory leads to results in real-world/synthetic setups - Clear motivation for the model-agnostic framework being the preferred choice; I believe it is powerful for a framework to be model-agnostic, but it shouldn't oversimplify the definitions of the different types of generative models to fit them within the framework. - More explanation of when the size of $d$ becomes an issue, i.e., how high-dimensional, and when we can expect this in common models. rating: 7 confidence: 3
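Illustrative aside (not from the submission or the review): a minimal synthetic sketch of the model-agnostic point raised above. A probe only ever sees a fixed-size vector, so stand-ins for an LLM layer activation and a diffusion denoiser activation pass through the same interface; all shapes and planted concepts below are assumptions for illustration.

```python
# Sketch: the same probe interface applied to two very different (synthetic)
# "hidden state" sources, mirroring the reviewer's model-agnostic concern.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def probe_accuracy(reps, labels):
    # Train on the first half of the examples, evaluate on the second half.
    n = len(labels) // 2
    clf = LogisticRegression(max_iter=1000).fit(reps[:n], labels[:n])
    return clf.score(reps[n:], labels[n:])

# Stand-in "LLM" states: mean-pooled token activations, 768-dim (assumed shape).
llm_reps = rng.standard_normal((1000, 768))
# Stand-in "diffusion" states: flattened feature map at one timestep, 256-dim (assumed shape).
diff_reps = rng.standard_normal((1000, 256))

for name, reps in [("llm", llm_reps), ("diffusion", diff_reps)]:
    w = rng.standard_normal(reps.shape[1])
    labels = (reps @ w > 0).astype(int)          # a planted linear concept in each space
    print(name, "probe accuracy:", probe_accuracy(reps, labels))
```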
s6P4W7QTD4
The Probe Paradigm: A Theoretical Foundation for Explaining Generative Models
[ "Amit Kiran Rege" ]
To understand internal representations in generative models, there has been a long line of research using \emph{probes}, i.e., shallow binary classifiers trained on the model's representations to indicate the presence/absence of human-interpretable \emph{concepts}. While the focus of much of this work has been empirical, it is important to establish rigorous guarantees on the use of such methods to understand their limitations. To this end, we introduce a formal framework to theoretically study explainability in generative models using probes. We discuss the applicability of our framework to a number of practical models, and then, using our framework, we establish theoretical results on sample complexity and the limitations of probing in high-dimensional spaces. We then prove results highlighting significant limitations of probing strategies in the worst case. Our findings underscore the importance of cautious interpretation of probing results and imply that comprehensive auditing of complex generative models might be hard even with white-box access to internal representations.
/pdf/bf20241d4584b97dac870271a66772d0edde98d2.pdf
Xp7X2cUzRG
official_review
1,728,514,458,286
s6P4W7QTD4
[ "everyone" ]
[ "NeurIPS.cc/2024/Workshop/SafeGenAi/Submission244/Reviewer_XsE3" ]
title: Formal framework and theorems for informed application of probing to generative model internal representations review: Thank you for submitting to the NeurIPS SafeGenAI workshop! Your work identifies a critical but often overlooked gap in the application of high-performing, yet frequently uninterpreted, probing classifiers, a topic of growing importance in AI safety and security research. First, I encourage you to take a more assertive stance on the need for researchers and peer reviewers to expect basic interpretability checks and common-sense validations in both linear and MLP-based probes. The field currently has a proliferation of high-performing linear probes with minimal interpretability assessments, resulting in a gap similar to the state of the wildlife biology literature in the late 2000s, which lacked such foundational checks (see [2007 example](https://wildlife.onlinelibrary.wiley.com/doi/10.2193/2006-285)). While your paper emphasizes the interpretability of MLP probes, it is equally essential to apply interpretability measures to simpler linear probes to ensure their relevance and reliability. Second, while your framework and its implications are valuable, the paper's narrative would benefit from greater cohesion. Initially, it appears to address the need for interpretability in probes; however, it shifts toward presenting a formal framework for multi-level concept attribution, with occasional oscillation between the two themes. A more unified central message could enhance the paper's clarity and impact. **More specific and technical feedback:** - *Definition of "fine-grained concepts":* This term lacks clarity. For example, does it encompass subjective notions like "honesty" or "happiness," as explored in the RepEng paper? - Have you considered inverse probing, where the probe aims to identify the absence rather than the presence of concepts? - The statement "linear probes have a lower risk of overfitting but may miss complex concept encodings" could be expanded. For example, why do linear probes perform well in detecting concepts that shouldn't be linearly separable? A recent paper shows that even a simple logistic regression can achieve high ROC AUC when detecting prompt injections ([example](https://arxiv.org/abs/2406.00799)). - Theorem 1 assumes hidden representations drawn from a standard normal distribution, which simplifies the mathematics but potentially limits applicability to real-world generative model representations, which are often structured or clustered. Is this assumption realistic in your view? - Strengthen your theoretical insights by evaluating them empirically. For instance, what evidence supports your claims about sample complexity and overfitting? - Proposition 2 states that a maximum of *d* independent concepts can be detected by linear probes in a *d*-dimensional space, based on hyperplane independence. This overlooks cases where concepts may overlap or correlate within the representations yet remain linearly separable. Additionally, superposition, where multiple concepts are encoded within overlapping subspaces, might challenge strict independence as the sole criterion for detection. - Theorem 4 posits that "any fixed probing strategy with sample size *m*" cannot detect certain concepts with certainty, suggesting a no-free-lunch scenario. Can you clarify whether this limitation is due to concept complexity or dimensional constraints?
- *Minor issues:* Some minor typographical errors detract from readability, such as inconsistent hyphenation (e.g., "explainability methods" vs. "explain-ability methods") and a misspelling in the abstract ("presensce" instead of "presence"). rating: 6 confidence: 3
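Illustrative aside (not from the submission or the reviews): a rough synthetic simulation of the dimensionality and sample-complexity questions raised above, under the standard-normal representation assumption mentioned for Theorem 1. With a fixed probe training set of size m, held-out accuracy of a linear probe on a planted linear concept degrades as the representation dimension d grows; none of the paper's actual bounds are reproduced here.

```python
# Rough simulation: fixed probe training set (m samples), growing dimension d.
# Representations are standard normal; the concept is a planted half-space.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
m, n_test = 200, 2000

for d in [16, 64, 256, 1024]:
    w = rng.standard_normal(d)                    # planted concept direction
    X_train = rng.standard_normal((m, d))
    X_test = rng.standard_normal((n_test, d))
    y_train = (X_train @ w > 0).astype(int)
    y_test = (X_test @ w > 0).astype(int)
    probe = LogisticRegression(max_iter=2000).fit(X_train, y_train)
    print(f"d={d:5d}  held-out probe accuracy: {probe.score(X_test, y_test):.3f}")
```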