eminorhan committed
Commit a0ce750
1 Parent(s): f02b71f

Update README.md

Files changed (1):
  1. README.md +12 -1
README.md CHANGED
@@ -6,10 +6,21 @@ language:
 - en
 size_categories:
 - 10M<n<100M
+configs:
+- config_name: chunk_size_1024
+  data_files: "gutenberg_en_paragraph_1024.jsonl"
+- config_name: chunk_size_2048
+  data_files: "gutenberg_en_paragraph_2048.jsonl"
 ---
 
 **Description of the dataset**
 
 This is the November 16, 2023 snapshot of the English subset of the Project Gutenberg corpus (containing 56712 documents in total), downloaded and preprocessed with code from [this repository](https://github.com/eminorhan/gutenberg).
 
-The data is divided into ~14.2M records, each consisting of a chunk of text a few paragraphs long (at least 1024 characters) together with the corresponding metadata. This is ideal for generating fine-grained embeddings of the texts.
+Two different versions of the data are provided:
+
+* The `chunk_size_1024` version divides the data into ~14.2M records, each consisting of a chunk of text a few paragraphs long (at least 1024 characters) together with the corresponding metadata.
+
+* The `chunk_size_2048` version divides the data into ~8.2M records, each consisting of a chunk of text a few paragraphs long (at least 2048 characters) together with the corresponding metadata.
+
+This dataset is ideal for generating fine-grained embeddings of the documents.
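
For reference, the two new `configs` entries map directly onto config names accepted by `datasets.load_dataset`. A minimal loading sketch follows; the repo id `eminorhan/gutenberg_en` is an assumption standing in for the actual dataset id, which is not shown on this page:

```python
from datasets import load_dataset

# The config name selects between the two chunkings declared in the YAML above.
# ASSUMPTION: "eminorhan/gutenberg_en" is a placeholder for the real dataset id.
ds = load_dataset("eminorhan/gutenberg_en", "chunk_size_1024",
                  split="train", streaming=True)  # stream: ~14.2M records

for record in ds:
    print(record)  # one record: a text chunk (>= 1024 chars) plus its metadata
    break
```

The chunking itself is done by the linked preprocessing repository; as the card describes it, the rule amounts to merging consecutive paragraphs until a minimum character count is reached. A rough sketch of that rule (not the repository's actual code):

```python
def chunk_paragraphs(paragraphs, min_chars=1024):
    """Greedily merge consecutive paragraphs into chunks of at least
    `min_chars` characters -- a sketch of the rule the card describes,
    not the preprocessing code in github.com/eminorhan/gutenberg."""
    chunks, current = [], ""
    for p in paragraphs:
        current = f"{current}\n\n{p}" if current else p
        if len(current) >= min_chars:
            chunks.append(current)
            current = ""
    if current:  # short remainder; the real pipeline may handle this differently
        chunks.append(current)
    return chunks
```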