matthieumeeus97 committed
Commit • 10048f4 • 1 Parent(s): 9ce1d3b
Update README.md
README.md CHANGED
@@ -15,14 +15,13 @@ dataset_info:
dataset_size: 76605258
---

# ArXiv papers from The Pile for document-level MIAs against LLMs (split into sequences)
This dataset contains **sequences from** ArXiv papers randomly sampled from the train (members) and test (non-members) splits of (the uncopyrighted version of) [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted).
We randomly sample 1,000 member documents and 1,000 non-member documents, ensuring that each selected document has at least 5,000 words (any sequence of characters separated by whitespace).
This dataset contains the first 25 sequences of 200 words from each of the documents made available in full [here](https://huggingface.co/datasets/imperial-cpg/pile_arxiv_doc_mia). The splitting step can be sketched roughly as follows (illustrative code only, not the authors' actual preprocessing; function and parameter names are made up here):
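```python
# Rough sketch of the splitting described above: keep the first 25 sequences of
# 200 whitespace-separated words per document. Names are illustrative only.
def document_to_sequences(document: str, words_per_seq: int = 200, max_seqs: int = 25):
    words = document.split()  # "words" = runs of characters separated by whitespace
    sequences = [
        " ".join(words[i:i + words_per_seq])
        for i in range(0, len(words), words_per_seq)
    ]
    return sequences[:max_seqs]

# A document with at least 5,000 words yields all 25 sequences of 200 words.
assert len(document_to_sequences("word " * 5000)) == 25
```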
The dataset can be used to develop and evaluate document-level MIAs against LLMs trained on The Pile.
Target models include the suite of Pythia and GPT-Neo models available [here](https://huggingface.co/EleutherAI). Our understanding is that the deduplication executed on the Pile to create the "Pythia-dedup" models was only applied to the training dataset, suggesting that this dataset of members/non-members is also valid for these models. As a rough illustration of such an evaluation, the sketch below scores sequences with a simple loss-based (perplexity) baseline against a Pythia checkpoint; the repository ID below is a placeholder (the full-document version linked above), and the split and column names are assumptions, not taken from this card:
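```python
# Minimal sketch of a loss-based membership-inference baseline.
# Assumptions (not from this card): the repository ID (substitute this
# dataset's own Hub ID), the split name, and the "text"/"label" columns.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-160m"  # any Pythia / GPT-Neo checkpoint trained on The Pile
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

dataset = load_dataset("imperial-cpg/pile_arxiv_doc_mia", split="train")  # assumed ID and split

@torch.no_grad()
def sequence_loss(text: str) -> float:
    """Average token-level cross-entropy of `text` under the target model."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    return model(**inputs, labels=inputs["input_ids"]).loss.item()

# Lower loss (higher likelihood) is taken as evidence of membership.
example = dataset[0]
print(example.get("label"), sequence_loss(example["text"]))
```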
For more information, we refer to [the paper](https://arxiv.org/pdf/2406.17975).