---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 146613669
    num_examples: 2000
  download_size: 67134534
  dataset_size: 146613669
---
|
# ArXiv papers from The Pile for document-level MIAs against LLMs
|
|
|
This dataset contains **full** ArXiv papers randomly sampled from the train split (members) and test split (non-members) of (the uncopyrighted version of) [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted).
|
We randomly sample 1,000 member documents and 1,000 non-member documents, ensuring that each selected document contains at least 5,000 words (any sequence of characters separated by whitespace).
|
We also provide a version of the dataset in which each document is split into 25 sequences of 200 words [here](https://huggingface.co/datasets/imperial-cpg/pile_arxiv_doc_mia_sequences).
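
For reference, splitting a document into 25 non-overlapping sequences of 200 words can be done as in the minimal sketch below. This is only an illustration of the preprocessing described above, not the exact code used to build the sequence-level dataset:

```python
def split_into_sequences(text: str, seq_len: int = 200, n_seqs: int = 25) -> list[str]:
    """Split a document into n_seqs non-overlapping sequences of seq_len
    whitespace-separated words (documents here have at least 5,000 words)."""
    words = text.split()
    return [" ".join(words[i * seq_len:(i + 1) * seq_len]) for i in range(n_seqs)]
```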
|
|
|
The dataset contains the following columns:
- text: the raw text of the document
- label: binary membership label (1 = member, 0 = non-member)
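
The dataset can be loaded with the `datasets` library. Note that the repository identifier below is an assumption (based on the naming of the sequence-level dataset linked above) and should be adjusted to the actual repository name:

```python
from datasets import load_dataset

# Repository name is assumed here; replace with the actual dataset identifier.
ds = load_dataset("imperial-cpg/pile_arxiv_doc_mia", split="train")

members = ds.filter(lambda ex: ex["label"] == 1)      # documents from the Pile train split
non_members = ds.filter(lambda ex: ex["label"] == 0)  # documents from the Pile test split
print(len(members), len(non_members))                 # expected: 1000 1000
```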
|
|
|
The dataset can be used to develop and evaluate document-level membership inference attacks (MIAs) against LLMs trained on The Pile.
|
Target models include the suite of Pythia and GPT-Neo models available [here](https://huggingface.co/EleutherAI). Our understanding is that the deduplication of the Pile used to train the deduplicated ("Pythia-dedup") models was applied only to the training data, suggesting that this members/non-members dataset remains valid for those models as well.
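
As an illustration of how a simple document-level MIA baseline could be evaluated against one of these target models, the sketch below scores each document by the negative average token loss under the model (higher score suggesting membership). It truncates documents to the model's context window and is not the method proposed in the paper; it assumes the `ds` object loaded in the snippet above:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "EleutherAI/pythia-1.4b"  # any Pythia / GPT-Neo checkpoint trained on The Pile
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def avg_token_loss(text: str, max_length: int = 2048) -> float:
    # Truncate to the model's context window; full papers are much longer.
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=max_length)
    out = model(**enc, labels=enc["input_ids"])
    return out.loss.item()

# Lower average loss is (weak) evidence that a document was a training member.
scores = [-avg_token_loss(ex["text"]) for ex in ds]
labels = [ex["label"] for ex in ds]
# scores and labels can then be passed to e.g. sklearn.metrics.roc_auc_score.
```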
|
|
|
For more information, we refer to [the paper](https://arxiv.org/pdf/2406.17975).
|