---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 1K<n<10K
---

# Books from Project Gutenberg (after PG-19)

This dataset contains 9,542 books collected from [Project Gutenberg](https://www.gutenberg.org/), an online library of free e-books.
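
The dataset can be loaded with the 🤗 `datasets` library. Below is a minimal sketch; the repository ID and the `text` column name are illustrative placeholders, so check the header of this dataset card for the actual values:

```python
from datasets import load_dataset

# Hypothetical repository ID: replace it with the ID shown at the top of this page.
REPO_ID = "matthieumeeus97/gutenberg-after-pg19"

# Assuming the dataset exposes a single "train" split.
ds = load_dataset(REPO_ID, split="train")

print(ds)  # row count and column names

# Assuming the book content lives in a single string column named "text":
print(ds[0]["text"][:500])  # first 500 characters of the first book
```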

Specifically, we collect books that were added to Project Gutenberg after the last book included in the widely used [PG-19 dataset](https://huggingface.co/datasets/deepmind/pg19). Among the books included in PG-19, the latest release date on Project Gutenberg is February 10, 2019. We use an [open source library](https://github.com/kpully/gutenberg_scraper) to download all English books added to Project Gutenberg after this date (adapting the code [here](https://github.com/computationalprivacy/document-level-membership-inference/tree/main/data/raw_gutenberg)). As preprocessing, we keep only the text between the explicit start and end markers of the uniformly formatted text files (sketched below).
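
The exact preprocessing code lives in the repository linked above; the following is only a minimal sketch of the idea, assuming the standard Project Gutenberg marker lines:

```python
import re

# Standard Project Gutenberg markers, e.g.
#   *** START OF THE PROJECT GUTENBERG EBOOK MOBY DICK ***
#   *** END OF THE PROJECT GUTENBERG EBOOK MOBY DICK ***
START_RE = re.compile(r"\*\*\*\s*START OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.IGNORECASE)
END_RE = re.compile(r"\*\*\*\s*END OF (?:THE|THIS) PROJECT GUTENBERG EBOOK.*?\*\*\*", re.IGNORECASE)

def strip_gutenberg_boilerplate(raw: str) -> str:
    """Keep only the book text between the explicit start and end markers."""
    start = START_RE.search(raw)
    end = END_RE.search(raw)
    if start is None or end is None:
        raise ValueError("Project Gutenberg start/end markers not found")
    return raw[start.end():end.start()].strip()
```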

This data has been collected as part of the experimental setup of the paper *"Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models"* ([link](https://arxiv.org/pdf/2310.15007)). The goal was to create a dataset of *non-member* documents that are representative of PG-19, in order to develop and evaluate a Membership Inference Attack (MIA) against a Large Language Model (LLM) trained on data containing PG-19.

We release here the data we used to generate the results discussed in the paper, mainly to facilitate further research in similar directions. Importantly, follow-up research ([link](https://arxiv.org/pdf/2406.17975)) suggests that this dataset exhibits a serious distribution shift in language compared to the books in PG-19. Hence, we do not recommend using this data, at least not in its current form, as non-member data to develop and evaluate post-hoc MIAs against LLMs.

Of course, the dataset also represents a rich source of natural language from literature (most of which should be in the public domain in the US) and could be used for other purposes.

If you find this dataset helpful for your work, kindly cite us as:

```
@article{meeus2023did,
  title={Did the Neurons Read your Book? Document-level Membership Inference for Large Language Models},
  author={Meeus, Matthieu and Jain, Shubham and Rei, Marek and de Montjoye, Yves-Alexandre},
  journal={arXiv preprint arXiv:2310.15007},
  year={2023}
}
```