pratyushmaini committed
Commit cd2e854 · verified · 1 Parent(s): e402158

Update app.py

Files changed (1): app.py +2 -0
app.py CHANGED
```diff
@@ -54,6 +54,8 @@ with gr.Blocks() as demo:
 
  </div>
 
+ ## Paper: [https://arxiv.org/abs/2404.15146](https://arxiv.org/abs/2404.15146)
+
  ## Abstract
 
  Large language models (LLMs) trained on web-scale datasets raise substantial concerns regarding permissible data usage. One major question is whether these models "memorize" all their training data, or whether their integration of many data sources is more akin to how a human would learn and synthesize information. The answer hinges, to a large degree, on how we define memorization. In this work, we propose the Adversarial Compression Ratio (ACR) as a metric for assessing memorization in LLMs: a given string from the training data is considered memorized if it can be elicited by a prompt shorter than the string itself. In other words, these strings can be "compressed" with the model by computing adversarial prompts of fewer tokens. We outline the limitations of existing notions of memorization and show how the ACR overcomes these challenges by (i) offering an adversarial view of measuring memorization, especially for monitoring unlearning and compliance; and (ii) allowing the flexibility to measure memorization for arbitrary strings at reasonably low compute. Our definition serves as a valuable and practical tool for determining when model owners may be violating terms around data usage, providing a potential legal tool and a critical lens through which to address such scenarios.
```
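For readers of the diff, the abstract's Adversarial Compression Ratio boils down to a ratio of token counts. The sketch below illustrates only that definition: the whitespace tokenizer and the hand-picked prompt are illustrative stand-ins, since the paper's metric uses the model's own tokenizer and obtains the shortest eliciting prompt via an adversarial search, which is not implemented here.

```python
# Minimal sketch of the Adversarial Compression Ratio (ACR) from the abstract:
# a training string counts as memorized if some prompt with fewer tokens
# elicits it from the model. NOTE: the whitespace tokenizer and the example
# prompt are stand-ins for illustration; the paper uses the model's own
# tokenizer, and the shortest eliciting prompt must be found by an
# adversarial prompt-optimization search (not shown here).

def num_tokens(text: str) -> int:
    """Stand-in tokenizer: counts whitespace-separated tokens."""
    return len(text.split())

def adversarial_compression_ratio(target: str, eliciting_prompt: str) -> float:
    """ACR = (tokens in target) / (tokens in the shortest eliciting prompt)."""
    return num_tokens(target) / num_tokens(eliciting_prompt)

def is_memorized(target: str, eliciting_prompt: str) -> bool:
    """Memorized iff the prompt is shorter than the string itself, i.e. ACR > 1."""
    return adversarial_compression_ratio(target, eliciting_prompt) > 1.0

if __name__ == "__main__":
    target = "Call me Ishmael. Some years ago, never mind how long precisely"
    prompt = "opening Moby Dick"  # hypothetical shortest eliciting prompt
    print(round(adversarial_compression_ratio(target, prompt), 2))  # 3.67
    print(is_memorized(target, prompt))                             # True
```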