tim-essential committed · Commit 9c49650 · Parent: c673e60

Upload Essential-Web v1.0

This view is limited to 50 files because it contains too many changes.
Files changed (50)
  1. README.md +589 -3
  2. data/crawl=CC-MAIN-2013-20/train-00000-of-04233.parquet +3 -0
  3. data/crawl=CC-MAIN-2013-20/train-00001-of-04233.parquet +3 -0
  4. data/crawl=CC-MAIN-2013-20/train-00002-of-04233.parquet +3 -0
  5. data/crawl=CC-MAIN-2013-20/train-00003-of-04233.parquet +3 -0
  6. data/crawl=CC-MAIN-2013-20/train-00004-of-04233.parquet +3 -0
  7. data/crawl=CC-MAIN-2013-20/train-00005-of-04233.parquet +3 -0
  8. data/crawl=CC-MAIN-2013-20/train-00006-of-04233.parquet +3 -0
  9. data/crawl=CC-MAIN-2013-20/train-00007-of-04233.parquet +3 -0
  10. data/crawl=CC-MAIN-2013-20/train-00008-of-04233.parquet +3 -0
  11. data/crawl=CC-MAIN-2013-20/train-00009-of-04233.parquet +3 -0
  12. data/crawl=CC-MAIN-2013-20/train-00010-of-04233.parquet +3 -0
  13. data/crawl=CC-MAIN-2013-20/train-00011-of-04233.parquet +3 -0
  14. data/crawl=CC-MAIN-2013-20/train-00012-of-04233.parquet +3 -0
  15. data/crawl=CC-MAIN-2013-20/train-00013-of-04233.parquet +3 -0
  16. data/crawl=CC-MAIN-2013-20/train-00014-of-04233.parquet +3 -0
  17. data/crawl=CC-MAIN-2013-20/train-00015-of-04233.parquet +3 -0
  18. data/crawl=CC-MAIN-2013-20/train-00016-of-04233.parquet +3 -0
  19. data/crawl=CC-MAIN-2013-20/train-00017-of-04233.parquet +3 -0
  20. data/crawl=CC-MAIN-2013-20/train-00018-of-04233.parquet +3 -0
  21. data/crawl=CC-MAIN-2013-20/train-00019-of-04233.parquet +3 -0
  22. data/crawl=CC-MAIN-2013-20/train-00020-of-04233.parquet +3 -0
  23. data/crawl=CC-MAIN-2013-20/train-00021-of-04233.parquet +3 -0
  24. data/crawl=CC-MAIN-2013-20/train-00022-of-04233.parquet +3 -0
  25. data/crawl=CC-MAIN-2013-20/train-00023-of-04233.parquet +3 -0
  26. data/crawl=CC-MAIN-2013-20/train-00024-of-04233.parquet +3 -0
  27. data/crawl=CC-MAIN-2013-20/train-00025-of-04233.parquet +3 -0
  28. data/crawl=CC-MAIN-2013-20/train-00026-of-04233.parquet +3 -0
  29. data/crawl=CC-MAIN-2013-20/train-00027-of-04233.parquet +3 -0
  30. data/crawl=CC-MAIN-2013-20/train-00028-of-04233.parquet +3 -0
  31. data/crawl=CC-MAIN-2013-20/train-00029-of-04233.parquet +3 -0
  32. data/crawl=CC-MAIN-2013-20/train-00030-of-04233.parquet +3 -0
  33. data/crawl=CC-MAIN-2013-20/train-00031-of-04233.parquet +3 -0
  34. data/crawl=CC-MAIN-2013-20/train-00032-of-04233.parquet +3 -0
  35. data/crawl=CC-MAIN-2013-20/train-00033-of-04233.parquet +3 -0
  36. data/crawl=CC-MAIN-2013-20/train-00034-of-04233.parquet +3 -0
  37. data/crawl=CC-MAIN-2013-20/train-00035-of-04233.parquet +3 -0
  38. data/crawl=CC-MAIN-2013-20/train-00036-of-04233.parquet +3 -0
  39. data/crawl=CC-MAIN-2013-20/train-00037-of-04233.parquet +3 -0
  40. data/crawl=CC-MAIN-2013-20/train-00038-of-04233.parquet +3 -0
  41. data/crawl=CC-MAIN-2013-20/train-00039-of-04233.parquet +3 -0
  42. data/crawl=CC-MAIN-2013-20/train-00040-of-04233.parquet +3 -0
  43. data/crawl=CC-MAIN-2013-20/train-00041-of-04233.parquet +3 -0
  44. data/crawl=CC-MAIN-2013-20/train-00042-of-04233.parquet +3 -0
  45. data/crawl=CC-MAIN-2013-20/train-00043-of-04233.parquet +3 -0
  46. data/crawl=CC-MAIN-2013-20/train-00044-of-04233.parquet +3 -0
  47. data/crawl=CC-MAIN-2013-20/train-00045-of-04233.parquet +3 -0
  48. data/crawl=CC-MAIN-2013-20/train-00046-of-04233.parquet +3 -0
  49. data/crawl=CC-MAIN-2013-20/train-00047-of-04233.parquet +3 -0
  50. data/crawl=CC-MAIN-2013-20/train-00048-of-04233.parquet +3 -0
README.md CHANGED
@@ -1,3 +1,589 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ size_categories:
+ - n>1T
+ ---
+ # 🌐 Essential-Web: Complete 24-Trillion-Token Dataset (Upload in progress...)
+
+ [🏆 Website](https://www.essential.ai/) | [🖥️ Code](https://github.com/Essential-AI/eai-taxonomy) | [📖 Paper](https://huggingface.co/papers/2506.14111)
+
+ ## 📋 Dataset Description
+
+ Essential-Web is a 24-trillion-token web dataset with document-level metadata designed for flexible dataset curation. The metadata covers subject matter classification, web page type, content complexity, and document quality scores for each of the 23.6 billion documents.
+
+ Researchers can filter and curate specialized datasets using the provided metadata, reducing the need for custom preprocessing pipelines and domain-specific classifiers.
+
+ ## 🔍 Free Decimal Correspondence (FDC) Taxonomy
+
+ Essential-Web uses the Free Decimal Correspondence, a Dewey Decimal-inspired open taxonomy with 12 main categories for classifying web content. This systematic approach enables precise domain filtering and dataset curation.
+
+ For help navigating FDC codes, see: https://www.librarything.com/mds
+
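+ As a quick, hedged illustration, the snippet below streams the dataset and keeps only documents whose primary FDC code falls under category 5 (Science). It relies on the nested taxonomy fields documented in the schema section further down (`eai_taxonomy.free_decimal_correspondence.primary.code`); swap the prefix to target another category.
+
+ ```python
+ from datasets import load_dataset
+
+ # Stream so nothing has to be downloaded up front
+ ds = load_dataset("EssentialAI/essential-web-v1.0", streaming=True)["train"]
+
+ # Keep documents whose primary FDC code starts with "5" (Science)
+ science = ds.filter(
+     lambda ex: str(ex["eai_taxonomy"]["free_decimal_correspondence"]["primary"]["code"]).startswith("5")
+ )
+
+ for doc in science.take(3):
+     print(doc["id"])
+ ```
+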
+ ## ⚙️ Dataset Creation
+
+ Essential-Web was created using a comprehensive processing pipeline starting from Common Crawl data:
+
+ ### 📥 Source Data
+ - **DCLM Pool**: 89 resiliparse-extracted Common Crawl WARC snapshots (CC-MAIN-2013-20 to CC-MAIN-2022-49)
+ - **Additional Snapshots**: 12 additional snapshots extracted from CC-MAIN-2023-06 to CC-MAIN-2024-38 using resiliparse
+ - **Total**: 101 Common Crawl snapshots processed
+
+ ### 🔧 Processing Pipeline
+ 1. **Document ID Generation**: Using `xxhash.xxh3_64_intdigest` for unique document identification
+ 2. **Global Deduplication**: Hash-based deduplication across all 101 snapshots
+ 3. **Minhash LSH Deduplication**: Snapshot-level deduplication with a Jaccard threshold of 0.7 (14 bands, 9 rows per band; see the sketch after this list)
+ 4. **Quality Annotation**: Statistical and model-based quality signals using a RedPajama-Data-V2 pipeline variant, including the DCLM-baseline fastText classifier
+ 5. **Quality Filtering**: Manually tuned filters to retain high-quality English documents while preserving math and code content
+ 6. **Taxonomy Labeling**: Classification of every document using EAI-Taxonomy-0.5b (~90,000 AMD MI300x GPU-hours)
+
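+ For intuition, here is a minimal, illustrative sketch of steps 1 and 3 using the `xxhash` and `datasketch` libraries. It assumes whitespace tokens as MinHash shingles; the production pipeline's exact shingling and infrastructure may differ.
+
+ ```python
+ import xxhash
+ from datasketch import MinHash, MinHashLSH
+
+ def doc_id(text: str) -> int:
+     # Step 1: stable 64-bit document identifier
+     return xxhash.xxh3_64_intdigest(text)
+
+ def sketch(text: str) -> MinHash:
+     # 14 bands x 9 rows per band = 126 permutations
+     m = MinHash(num_perm=126)
+     for token in text.split():
+         m.update(token.encode("utf-8"))
+     return m
+
+ # Step 3: LSH index banded for a ~0.7 Jaccard threshold
+ lsh = MinHashLSH(threshold=0.7, num_perm=126, params=(14, 9))
+
+ a = "the quick brown fox jumps over the lazy dog"
+ b = "the quick brown fox jumped over the lazy dog"
+ lsh.insert(doc_id(a), sketch(a))
+ print(lsh.query(sketch(b)))  # returns a's id: b is a near-duplicate
+ ```
+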
+ ## 🎯 Performance & Validation
+
+ We've curated example domain-specific datasets from Essential-Web using simple metadata filters, showing competitive performance relative to top-performing web-curated datasets:
+
+ - 🧮 **Math**: within 8.0% of web-curated baselines
+ - 💻 **Web Code**: 14.3% above web-curated baselines
+ - 🔬 **STEM**: 24.5% above web-curated baselines
+ - 🩺 **Medical**: 8.6% above web-curated baselines
+
+ *Note: These represent initial examples with significant room for further curation and improvement. Comparisons are against web-sourced datasets rather than specialized synthetic datasets.*
+
+ ## 🚀 Related Datasets & Models
+
+ ### Domain-Specific Datasets
+ We've curated high-quality domain-specific datasets from Essential-Web:
+
+ - **Math**: [EssentialAI/eai-taxonomy-math-w-fm](https://huggingface.co/datasets/EssentialAI/eai-taxonomy-math-w-fm)
+ - **Code**: [EssentialAI/eai-taxonomy-code-w-dclm](https://huggingface.co/datasets/EssentialAI/eai-taxonomy-code-w-dclm)
+ - **Medical**: [EssentialAI/eai-taxonomy-med-w-dclm](https://huggingface.co/datasets/EssentialAI/eai-taxonomy-med-w-dclm)
+ - **STEM**: [EssentialAI/eai-taxonomy-stem-w-dclm](https://huggingface.co/datasets/EssentialAI/eai-taxonomy-stem-w-dclm)
+
+ ### Classification Model
+ - **EAI-Taxonomy-0.5b**: [EssentialAI/eai-taxonomy-0.5b](https://huggingface.co/EssentialAI/eai-taxonomy-0.5b) - The efficient classifier used to label Essential-Web documents
+
+ ## 🎯 Intended Use
+
+ Essential-Web enables researchers to:
+ - 🚀 **Rapid Curation**: Create multi-billion-token domain-specific datasets in minutes using SQL-like filters
+ - 🔍 **Flexible Exploration**: Explore web content across subjects, quality levels, and content types
+ - 🏗️ **Custom Pipelines**: Build specialized training corpora without custom classification infrastructure
+ - 🔄 **Iterative Improvement**: Easily modify and refine dataset composition based on training results
+ - 📊 **Quality Control**: Filter out low-quality content (ads, product listings) while preserving reasoning-dense documents
+
+ # Dataset Schema Documentation
+
+ ## Overview
+
+ This dataset contains web-crawled text data with comprehensive metadata, quality signals, and taxonomic classifications. Each record represents a document extracted from web archives with detailed provenance tracking and quality assessment metrics.
+
+ ## Core Fields
+
+ | Field | Type | Description | Path |
+ |-------|------|-------------|------|
+ | `id` | `Int64` | Unique identifier based on document hash | `id` |
+ | `text` | `String` | The main textual content of the document | `text` |
+
+ ## EAI Taxonomy Classification
+
+ A comprehensive hierarchical classification system with primary and secondary labels, and the most important feature of this dataset. The taxonomy is designed to provide detailed subject categorization, document type identification, content quality assessment, and extraction quality indicators.
+
+ <details>
+ <summary><strong>Free Decimal Correspondence (FDC)</strong></summary>
+
+ A Dewey Decimal-inspired classification system with 3-level hierarchical labels. The FDC provides nested categories where each successive level refines its parent category. It's designed to be compatible with the Dewey Decimal System for library cataloging.
+
+ **Level Structure:**
+ - **Level 1**: Top-level categories (0-9) covering broad subject areas like General works, Philosophy, Religion, Social Sciences, etc.
+ - **Level 2**: Sub-divisions (00-99) that refine Level 1 categories
+ - **Level 3**: Specific categories (000-999) that further refine Level 2 categories
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main classification code | `eai_taxonomy.free_decimal_correspondence.primary.code` |
+ | Primary Level 1 | Top-level category (0=General works, 1=Philosophy, 2=Religion, 3=Social Sciences, 4=Language, 5=Science, 6=Technology, 7=Arts, 8=Literature, 9=History/Geography) | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_1` |
+ | Primary Level 2 | Mid-level category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_2` |
+ | Primary Level 3 | Specific category | `eai_taxonomy.free_decimal_correspondence.primary.labels.level_3` |
+ | Secondary Code | Alternative classification code | `eai_taxonomy.free_decimal_correspondence.secondary.code` |
+ | Secondary Level 1 | Alternative top-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_1` |
+ | Secondary Level 2 | Alternative mid-level category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_2` |
+ | Secondary Level 3 | Alternative specific category | `eai_taxonomy.free_decimal_correspondence.secondary.labels.level_3` |
+
+ We recommend this viewer for easily navigating the FDC categories when curating filters: https://www.librarything.com/mds
+
+ </details>
+
+ <details>
+ <summary><strong>Bloom's Taxonomy Integration</strong></summary>
+
+ Based on Anderson and Krathwohl's 2001 revision of Bloom's Taxonomy of Educational Objectives, providing two complementary categorization dimensions for educational content analysis.
+
+ ### Knowledge Domain
+ Categorizes the type of knowledge demonstrated in the document:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.primary.code` |
+ | Primary Label | Main knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.primary.label` |
+ | Secondary Code | Alternative knowledge domain code | `eai_taxonomy.bloom_knowledge_domain.secondary.code` |
+ | Secondary Label | Alternative knowledge domain label | `eai_taxonomy.bloom_knowledge_domain.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Description |
+ |------|-------|-------------|
+ | `-1` | Abstain | Unable to determine |
+ | `1` | Factual | Basic elements to learn or solve problems |
+ | `2` | Conceptual | Interrelationships between basic elements within larger context |
+ | `3` | Procedural | Methods and techniques in the discipline |
+ | `4` | Metacognitive | Awareness of how learning works in relation to oneself |
+
+ ### Cognitive Processing Level
+ Assesses the learning and thinking skill levels demonstrated by the document author:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main cognitive process code | `eai_taxonomy.bloom_cognitive_process.primary.code` |
+ | Primary Label | Main cognitive process label | `eai_taxonomy.bloom_cognitive_process.primary.label` |
+ | Secondary Code | Alternative cognitive process code | `eai_taxonomy.bloom_cognitive_process.secondary.code` |
+ | Secondary Label | Alternative cognitive process label | `eai_taxonomy.bloom_cognitive_process.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Description |
+ |------|-------|-------------|
+ | `-1` | Abstain | Unable to determine |
+ | `1` | Remember | Retrieve relevant knowledge from memory |
+ | `2` | Understand | Determine meaning of instructional messages |
+ | `3` | Apply | Use a procedure in a given situation |
+ | `4` | Analyze | Break materials into components and determine relationships |
+ | `5` | Evaluate | Make judgments based on criteria and standards |
+ | `6` | Create | Create new or original work |
+
+ </details>
+
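+ Because every taxonomy dimension carries a primary and a secondary label, and either can abstain (`-1`), filters are more robust if they fall back from primary to secondary. A small hypothetical helper, assuming integer codes as tabulated above:
+
+ ```python
+ # Hypothetical helper: return the first non-abstaining label of a
+ # primary/secondary pair, e.g. example["eai_taxonomy"]["bloom_knowledge_domain"]
+ def resolve_label(node):
+     for slot in ("primary", "secondary"):
+         entry = node.get(slot) or {}
+         if entry.get("code", -1) != -1:
+             return entry["label"]
+     return None  # both slots abstained
+
+ node = {
+     "primary": {"code": -1, "label": "Abstain"},
+     "secondary": {"code": 3, "label": "Procedural"},
+ }
+ print(resolve_label(node))  # -> "Procedural"
+ ```
+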
+ <details>
+ <summary><strong>Document Characteristics</strong></summary>
+
+ ### Document Type v1
+ In-house classification of common web document types and formats:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main document type code | `eai_taxonomy.document_type_v1.primary.code` |
+ | Primary Label | Main document type label | `eai_taxonomy.document_type_v1.primary.label` |
+ | Secondary Code | Alternative document type code | `eai_taxonomy.document_type_v1.secondary.code` |
+ | Secondary Label | Alternative document type label | `eai_taxonomy.document_type_v1.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Examples |
+ |------|-------|----------|
+ | `-1` | Abstain | Unable to classify |
+ | `1` | News/Editorial | CNN articles, opinion columns |
+ | `2` | Academic/Research | ArXiv papers, research articles |
+ | `3` | Reference/Encyclopedic/Educational | FAQs, Wikipedia entries |
+ | `4` | Code/Software | GitHub repos, code examples |
+ | `5` | Social/Forum | Conversation threads, Q&A boards |
+ | `6` | Promotional/Advertisement | Product pages, calls to action |
+ | `7` | Search/Directory/Bibliography | Link pages, search results |
+ | `8` | Adult/Pornographic | Adult content |
+ | `9` | Personal/Misc | Blogs, user profiles |
+ | `10` | Machine-Generated | Lorem ipsum, garbled text |
+ | `11` | Legal/Regulatory | Contracts, terms of service |
+ | `12` | Government/Political | Legislation, press releases |
+ | `13` | Literary/Creative | Poems, short stories |
+ | `14` | Reviews/Critiques | Film critiques, product reviews |
+ | `15` | E-Commerce/Marketplace | eBay listings, Amazon pages |
+ | `16` | Images/Videos/Audio | YouTube videos, Imgur pages |
+ | `17` | Other/Unclassified | Documents that resist classification |
+
+ ### Document Type v2
+ Updated classification based on the WebOrganizer taxonomy with refined categories for improved document classification accuracy:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main document type code (v2) | `eai_taxonomy.document_type_v2.primary.code` |
+ | Primary Label | Main document type label (v2) | `eai_taxonomy.document_type_v2.primary.label` |
+ | Secondary Code | Alternative document type code (v2) | `eai_taxonomy.document_type_v2.secondary.code` |
+ | Secondary Label | Alternative document type label (v2) | `eai_taxonomy.document_type_v2.secondary.label` |
+
+ **Complete Value Mapping:**
+ | Code | Label | Examples |
+ |------|-------|----------|
+ | `-1` | Abstain | Documents requiring human review |
+ | `1` | About (Org.) | Company about pages, mission statements |
+ | `2` | About (Personal) | Personal bios, LinkedIn profiles |
+ | `3` | Academic Writing | Research papers, abstracts, dissertations |
+ | `4` | Audio Transcript | Interview transcripts, court records, captions |
+ | `5` | Comment Section | Reddit threads, blog comments |
+ | `6` | Content Listing | Site maps, product catalogs, directory listings |
+ | `7` | Creative Writing | Song lyrics, novel excerpts, poetry |
+ | `8` | Documentation | API docs, README files, user manuals |
+ | `9` | FAQ | FAQ pages, Q&A lists |
+ | `10` | Knowledge Article | Wikipedia articles, Britannica entries |
+ | `11` | Legal Notices | Privacy policies, license agreements, terms of service |
+ | `12` | Listicle | Buzzfeed-style articles, "Top 10" lists |
+ | `13` | News (Org.) | Government blog posts, corporate announcements |
+ | `14` | News Article | Newspaper articles, CNN content, breaking news |
+ | `15` | Nonfiction Writing | Editorials, obituaries, memoirs, opinion pieces |
+ | `16` | Personal Blog | Personal journals, diary entries, lifestyle blogs |
+ | `17` | Product Page | Product descriptions, course offerings, sales pages |
+ | `18` | Q&A Forum | Quora posts, Stack Exchange discussions |
+ | `19` | Spam / Ads | SEO keyword stuffing, promotional spam |
+ | `20` | Structured Data | Datasheets, glossaries, JSON files, databases |
+ | `21` | Customer Support | Help articles, troubleshooting guides |
+ | `22` | Truncated | Paywalled sites, image galleries, partial content |
+ | `23` | Tutorial | Cooking recipes, WikiHow pages, step-by-step guides |
+ | `24` | User Review | Yelp reviews, TripAdvisor feedback, product reviews |
+ | `25` | Other/Unclassified | Miscellaneous documents not fitting other categories |
+
+ ### Extraction Artifacts
+ Assessment of technical extraction quality, identifying issues from HTML-to-text conversion:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main extraction artifact code | `eai_taxonomy.extraction_artifacts.primary.code` |
+ | Primary Label | Main extraction artifact label | `eai_taxonomy.extraction_artifacts.primary.label` |
+ | Secondary Code | Alternative extraction artifact code | `eai_taxonomy.extraction_artifacts.secondary.code` |
+ | Secondary Label | Alternative extraction artifact label | `eai_taxonomy.extraction_artifacts.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Description |
+ |------|-------|-------------|
+ | `-1` | Abstain | Unable to determine |
+ | `0` | No Artifacts | Clean text with no leftover HTML or irrelevant elements |
+ | `1` | Leftover HTML | HTML/code artifacts remaining after extraction |
+ | `2` | Text Extraction Errors | Broken math expressions, encoding errors, improperly parsed tables |
+ | `3` | Irrelevant Content | Headers, footers, nav menus extracted by mistake |
+ | `4` | Indeterminate | Insufficient content to judge |
+
+ ### Missing Content
+ Assessment of content completeness and extraction success:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main missing content code | `eai_taxonomy.missing_content.primary.code` |
+ | Primary Label | Main missing content label | `eai_taxonomy.missing_content.primary.label` |
+ | Secondary Code | Alternative missing content code | `eai_taxonomy.missing_content.secondary.code` |
+ | Secondary Label | Alternative missing content label | `eai_taxonomy.missing_content.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Description |
+ |------|-------|-------------|
+ | `-1` | Abstain | Unable to determine |
+ | `0` | No Missing Content | Complete and coherent text |
+ | `1` | Truncated Snippets | Obvious "...", incomplete paragraphs, cut-off text |
+ | `2` | Click Here References | "Download here", "Click here" without linked content |
+ | `3` | Incoherent Flow | Unreadable or illogical flow due to missing context |
+ | `4` | Missing Images or Figures | Placeholders or references to missing visual content |
+ | `5` | Missing Referenced Data | References to absent tables/datasets (e.g., "See Table 3") |
+ | `6` | Indeterminate | Insufficient content to judge |
+
+ ### Text Structure Information
+
+ | Field | Type | Description | Path |
+ |-------|------|-------------|------|
+ | Line Start Indices | `List[Int32]` | Starting indices of each line | `line_start_n_end_idx.line_start_idx` |
+ | Line End Indices | `List[Int32]` | Ending indices of each line | `line_start_n_end_idx.line_end_idx` |
+
+ </details>
+
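+ The line indices above make it possible to recover the original line layout of `text` without re-splitting it. A minimal sketch, assuming the indices are character offsets into `text`:
+
+ ```python
+ # Rebuild the individual lines of a record from its start/end offsets
+ def lines(example):
+     idx = example["line_start_n_end_idx"]
+     return [
+         example["text"][start:end]
+         for start, end in zip(idx["line_start_idx"], idx["line_end_idx"])
+     ]
+ ```
+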
+ <details>
+ <summary><strong>Content Quality Dimensions</strong></summary>
+
+ Quality assessment inspired by NaturalReasoning and FineWeb efforts to categorize web data by information sophistication.
+
+ ### Reasoning Depth
+ Assesses the complexity and sophistication of logical reasoning in the document:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main reasoning depth code | `eai_taxonomy.reasoning_depth.primary.code` |
+ | Primary Label | Main reasoning depth label | `eai_taxonomy.reasoning_depth.primary.label` |
+ | Secondary Code | Alternative reasoning depth code | `eai_taxonomy.reasoning_depth.secondary.code` |
+ | Secondary Label | Alternative reasoning depth label | `eai_taxonomy.reasoning_depth.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Description |
+ |------|-------|-------------|
+ | `-1` | Abstain | Unable to determine |
+ | `1` | No Reasoning | Facts present but no evidence of reasoning |
+ | `2` | Basic Reasoning | Basic analysis with minimal explanation and summarization |
+ | `3` | Intermediate Reasoning | Some logical steps connecting ideas and structured thinking |
+ | `4` | Advanced Reasoning | Multi-step reasoning and thorough analysis with well-developed explanations |
+ | `5` | Exceptional Reasoning | Novel abstractions, theoretical frameworks, long chain-of-thought, original insights, or proofs |
+ | `6` | Indeterminate | Insufficient context to judge |
+
+ ### Technical Correctness
+ Evaluates the accuracy and precision of technical information:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main technical correctness code | `eai_taxonomy.technical_correctness.primary.code` |
+ | Primary Label | Main technical correctness label | `eai_taxonomy.technical_correctness.primary.label` |
+ | Secondary Code | Alternative technical correctness code | `eai_taxonomy.technical_correctness.secondary.code` |
+ | Secondary Label | Alternative technical correctness label | `eai_taxonomy.technical_correctness.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Description |
+ |------|-------|-------------|
+ | `-1` | Abstain | Unable to determine |
+ | `1` | Technically Flawed | Significant errors undermining content validity |
+ | `2` | Partially Correct | Some correctness but contains flaws, omissions, or errors |
+ | `3` | Mostly Correct | Technical correctness with minor flaws or incomplete explanations |
+ | `4` | Highly Correct | High technical correctness with precise definitions and clear explanations |
+ | `5` | Exceptionally Correct | Exceptional technical correctness with formal proofs and flawless content |
+ | `6` | Not Applicable/Indeterminate | No technical content or insufficient context |
+
+ ### Education Level
+ Assesses the educational background required to comprehend the content:
+
+ | Component | Description | Path |
+ |-----------|-------------|------|
+ | Primary Code | Main education level code | `eai_taxonomy.education_level.primary.code` |
+ | Primary Label | Main education level label | `eai_taxonomy.education_level.primary.label` |
+ | Secondary Code | Alternative education level code | `eai_taxonomy.education_level.secondary.code` |
+ | Secondary Label | Alternative education level label | `eai_taxonomy.education_level.secondary.label` |
+
+ **Possible Values:**
+ | Code | Label | Description |
+ |------|-------|-------------|
+ | `-1` | Abstain | Unable to determine |
+ | `1` | General Audience | Accessible to anyone with basic literacy; simple terms |
+ | `2` | High School Level | Requires high school education; specialized terminology explained for non-experts |
+ | `3` | Undergraduate Level | Requires college education; uses specialized terminology and assumes background knowledge |
+ | `4` | Graduate/Expert Level | Requires graduate education or domain expertise; assumes deep background knowledge |
+ | `5` | Indeterminate | Insufficient content to judge educational level |
+
+ </details>
+
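+ These dimensions are what make "reasoning-dense" filtering cheap. A sketch over one locally downloaded shard with pandas, assuming struct columns deserialize to Python dicts (pyarrow's default `to_pandas` behavior) and that codes are stored as integers; cast first if your shard stores them as strings:
+
+ ```python
+ import pandas as pd
+
+ # Path is illustrative: any downloaded shard works
+ df = pd.read_parquet("train-00000-of-04233.parquet")
+
+ # Keep documents with Advanced (4) or Exceptional (5) primary reasoning depth
+ depth = df["eai_taxonomy"].apply(lambda t: t["reasoning_depth"]["primary"]["code"])
+ dense = df[depth.isin([4, 5])]
+ print(f"{len(dense)} reasoning-dense documents in this shard")
+ ```
+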
+ <details>
+ <summary><strong>Metadata</strong></summary>
+
+ ## Metadata Structure
+
+ The `metadata` field contains a nested structure with web archive information:
+
+ | Field | Type | Description | Path |
+ |-------|------|-------------|------|
+ | **URL Information** | | | |
+ | URL | `String` | Original URL of the document | `metadata.url` |
+ | Source Domain | `String` | Domain name of the source | `metadata.source_domain` |
+ | Snapshot ID | `String` | Identifier for the web archive snapshot | `metadata.snapshot_id` |
+ | **WARC Metadata** | | WARC (Web ARChive) format metadata | |
+ | Content Length | `String` | Size of the content | `metadata.warc_metadata.Content-Length` |
+ | Content Type | `String` | MIME type of the content | `metadata.warc_metadata.Content-Type` |
+ | Block Digest | `String` | Checksum of the WARC block | `metadata.warc_metadata.WARC-Block-Digest` |
+ | Concurrent To | `String` | Related WARC records | `metadata.warc_metadata.WARC-Concurrent-To` |
+ | Date | `String` | Timestamp of the crawl | `metadata.warc_metadata.WARC-Date` |
+ | IP Address | `String` | Source server IP address | `metadata.warc_metadata.WARC-IP-Address` |
+ | Payload Type | `String` | Identified content type | `metadata.warc_metadata.WARC-Identified-Payload-Type` |
+ | Payload Digest | `String` | Checksum of the payload | `metadata.warc_metadata.WARC-Payload-Digest` |
+ | Record ID | `String` | Unique WARC record identifier | `metadata.warc_metadata.WARC-Record-ID` |
+ | Target URI | `String` | Original target URL | `metadata.warc_metadata.WARC-Target-URI` |
+ | Truncated | `String` | Truncation status | `metadata.warc_metadata.WARC-Truncated` |
+ | Type | `String` | WARC record type | `metadata.warc_metadata.WARC-Type` |
+ | Warcinfo ID | `String` | Associated warcinfo record | `metadata.warc_metadata.WARC-Warcinfo-ID` |
+ | **Additional Info** | | | |
+ | WARC Info | `String` | Additional WARC information | `metadata.warc_info` |
+
+ </details>
+
+ <details>
+ <summary><strong>Quality Signals</strong></summary>
+
+ The dataset includes two comprehensive quality assessment frameworks:
+
+ ## Red Pajama v2 Quality Metrics
+
+ Text quality indicators derived from the Red Pajama v2 filtering pipeline:
+
+ ### Content Structure Metrics
+ | Metric | Description | Path |
+ |--------|-------------|------|
+ | Original Length | Original document length | `quality_signals.red_pajama_v2.ccnet_original_length` |
+ | Original Lines | Number of lines in original document | `quality_signals.red_pajama_v2.ccnet_original_nlines` |
+ | Sentence Count | Total sentence count | `quality_signals.red_pajama_v2.rps_doc_num_sentences` |
+ | Word Count | Total word count | `quality_signals.red_pajama_v2.rps_doc_word_count` |
+ | Mean Word Length | Average word length | `quality_signals.red_pajama_v2.rps_doc_mean_word_length` |
+
+ ### Language Quality Metrics
+ | Metric | Description | Path |
+ |--------|-------------|------|
+ | Stop Word Fraction | Proportion of stop words | `quality_signals.red_pajama_v2.rps_doc_stop_word_fraction` |
+ | Unique Words Fraction | Fraction of unique words | `quality_signals.red_pajama_v2.rps_doc_frac_unique_words` |
+ | All Caps Words | Fraction of words in all capitals | `quality_signals.red_pajama_v2.rps_doc_frac_all_caps_words` |
+ | Non-Alphabetic Words | Fraction of non-alphabetic words | `quality_signals.red_pajama_v2.rps_doc_frac_no_alph_words` |
+ | Unigram Entropy | Entropy measure of word distribution | `quality_signals.red_pajama_v2.rps_doc_unigram_entropy` |
+
+ ### Content Pattern Analysis
+ | Metric | Description | Path |
+ |--------|-------------|------|
+ | Curly Bracket Density | Curly bracket density (code indicator) | `quality_signals.red_pajama_v2.rps_doc_curly_bracket` |
+ | Symbol-to-Word Ratio | Symbol-to-word ratio | `quality_signals.red_pajama_v2.rps_doc_symbol_to_word_ratio` |
+ | Ellipsis Line Endings | Lines ending with ellipsis | `quality_signals.red_pajama_v2.rps_doc_frac_lines_end_with_ellipsis` |
+ | Lorem Ipsum Detection | Lorem ipsum text detection | `quality_signals.red_pajama_v2.rps_doc_lorem_ipsum` |
+ | Offensive Content | Potentially offensive content detection | `quality_signals.red_pajama_v2.rps_doc_ldnoobw_words` |
+ | UT1 Blacklist | UT1 blacklist filtering score | `quality_signals.red_pajama_v2.rps_doc_ut1_blacklist` |
+
+ ### Duplication Detection
+ | Metric | Description | Path |
+ |--------|-------------|------|
+ | 5-gram Duplication | Character-level duplication for 5-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_5grams` |
+ | 6-gram Duplication | Character-level duplication for 6-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_6grams` |
+ | 7-gram Duplication | Character-level duplication for 7-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_7grams` |
+ | 8-gram Duplication | Character-level duplication for 8-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_8grams` |
+ | 9-gram Duplication | Character-level duplication for 9-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_9grams` |
+ | 10-gram Duplication | Character-level duplication for 10-grams | `quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_10grams` |
+ | Top 2-gram Coverage | Most frequent 2-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_2gram` |
+ | Top 3-gram Coverage | Most frequent 3-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_3gram` |
+ | Top 4-gram Coverage | Most frequent 4-gram coverage | `quality_signals.red_pajama_v2.rps_doc_frac_chars_top_4gram` |
+
+ ### Domain Importance Scores
+ | Metric | Description | Path |
+ |--------|-------------|------|
+ | Books Importance | Similarity to book content | `quality_signals.red_pajama_v2.rps_doc_books_importance` |
+ | Books Importance (Length Corrected) | Length-corrected books similarity | `quality_signals.red_pajama_v2.rps_doc_books_importance_length_correction` |
+ | OpenWebText Importance | Similarity to OpenWebText | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance` |
+ | OpenWebText Importance (Length Corrected) | Length-corrected OpenWebText similarity | `quality_signals.red_pajama_v2.rps_doc_openwebtext_importance_length_correction` |
+ | Wikipedia Importance | Similarity to Wikipedia | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance` |
+ | Wikipedia Importance (Length Corrected) | Length-corrected Wikipedia similarity | `quality_signals.red_pajama_v2.rps_doc_wikipedia_importance_length_correction` |
+
+ ## FastText Classification Scores
+
+ Domain and content type classification probabilities:
+
+ | Metric | Description | Path |
+ |--------|-------------|------|
+ | DCLM Score | DataComp-LM classifier score | `quality_signals.fasttext.dclm` |
+ | English Confidence | English language confidence | `quality_signals.fasttext.english` |
+ | Educational Content | Educational content approximation | `quality_signals.fasttext.fineweb_edu_approx` |
+ | General Math | General mathematics content | `quality_signals.fasttext.eai_general_math` |
+ | Web Math | Web-based mathematics content (OpenWebMath, OWM) | `quality_signals.fasttext.eai_open_web_math` |
+ | Code Content | Code content detection | `quality_signals.fasttext.eai_web_code` |
+
+ </details>
+
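+ Quality signals combine naturally with SQL-style filtering. A hedged sketch with DuckDB over locally downloaded shards (the glob path and thresholds are illustrative, not recommendations):
+
+ ```python
+ import duckdb
+
+ # DuckDB reads nested struct fields with dot notation
+ math_docs = duckdb.sql("""
+     SELECT id,
+            quality_signals.fasttext.eai_general_math AS math_score
+     FROM read_parquet('data/**/*.parquet')
+     WHERE quality_signals.fasttext.dclm > 0.5
+       AND quality_signals.fasttext.eai_general_math > 0.9
+     LIMIT 10
+ """).df()
+ print(math_docs)
+ ```
+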
+ ## How to Load the Dataset
+
+ This section provides examples of how to load the `EssentialAI/essential-web-v1.0` dataset using different Python libraries and frameworks.
+
+ ### Using Hugging Face Datasets (Standard Method)
+
+ The simplest way to load the dataset is using the Hugging Face `datasets` library:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the entire dataset (downloads every shard; see streaming below for a lighter option)
+ dataset = load_dataset("EssentialAI/essential-web-v1.0")
+
+ # View dataset structure
+ print(dataset)
+ print(f"Number of examples: {len(dataset['train'])}")
+ ```
+
+ You can also load the dataset in streaming mode to avoid downloading the entire dataset at once:
+
+ ```python
+ from datasets import load_dataset
+
+ # Load in streaming mode
+ dataset = load_dataset("EssentialAI/essential-web-v1.0", streaming=True)
+ data_stream = dataset["train"]
+
+ # Iterate through examples
+ for example in data_stream.take(5):
+     print(example)
+ ```
+
+ ### Using PySpark
+
+ For large-scale distributed processing, you can load the dataset using PySpark with the `pyspark_huggingface` library:
+
+ ```python
+ # First install the required library:
+ # pip install pyspark_huggingface
+
+ import pyspark_huggingface
+ from pyspark.sql import SparkSession
+
+ # Initialize Spark session
+ spark = SparkSession.builder.appName("EAI-Taxonomy-Web").getOrCreate()
+
+ # Load the dataset using the "huggingface" data source
+ df = spark.read.format("huggingface").load("EssentialAI/essential-web-v1.0")
+
+ # Basic dataset exploration
+ print(f"Dataset shape: {df.count()} rows, {len(df.columns)} columns")
+ df.show(10)
+ df.printSchema()
+
+ # Load only specific columns for efficiency
+ df_subset = (
+     spark.read.format("huggingface")
+     .option("columns", '["column1", "column2"]')  # Replace with actual column names
+     .load("EssentialAI/essential-web-v1.0")
+ )
+
+ # Run SQL queries on the dataset
+ df.createOrReplaceTempView("eai_web_dataset")
+ result = spark.sql("""
+     SELECT COUNT(*) as total_examples
+     FROM eai_web_dataset
+ """)
+ result.show()
+ ```
+
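+ Building on the temp view registered above, a hypothetical follow-up query groups documents by top-level FDC category; dot notation works on Spark struct columns, using the taxonomy paths from the schema section:
+
+ ```python
+ # Count documents per top-level FDC category (illustrative)
+ per_category = spark.sql("""
+     SELECT eai_taxonomy.free_decimal_correspondence.primary.labels.level_1 AS category,
+            COUNT(*) AS docs
+     FROM eai_web_dataset
+     GROUP BY 1
+     ORDER BY docs DESC
+ """)
+ per_category.show(truncate=False)
+ ```
+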
+ ### Using Daft
+
+ Daft provides a modern DataFrame library optimized for machine learning workloads. You can load the dataset directly from Hugging Face:
+
+ ```python
+ import daft
+
+ # Load the entire dataset
+ df = daft.read_parquet("hf://datasets/EssentialAI/essential-web-v1.0")
+
+ # Basic exploration
+ print("Dataset schema:")
+ print(df.schema())
+
+ print("First 5 rows:")
+ df.show(5)
+ ```
+
+ If you need to access private datasets or use authentication:
+
+ ```python
+ import daft
+ from daft.io import IOConfig, HTTPConfig
+
+ io_config = IOConfig(http=HTTPConfig(bearer_token="your_token"))
+ df = daft.read_parquet("hf://datasets/EssentialAI/essential-web-v1.0", io_config=io_config)
+ ```
+
+ ### Installation Requirements
+
+ Make sure you have the required libraries installed:
+
+ ```bash
+ # For Hugging Face datasets
+ pip install datasets
+
+ # For PySpark with Hugging Face integration
+ pip install pyspark_huggingface
+
+ # For Daft
+ pip install daft
+ ```
+
+ ## 📝 Citation
+
+ ```bibtex
+ @misc{ai2025essentialwebv1024ttokens,
+       title={Essential-Web v1.0: 24T tokens of organized web data},
+       author={Essential AI and : and Andrew Hojel and Michael Pust and Tim Romanski and Yash Vanjani and Ritvik Kapila and Mohit Parmar and Adarsh Chaluvaraju and Alok Tripathy and Anil Thomas and Ashish Tanwer and Darsh J Shah and Ishaan Shah and Karl Stratos and Khoi Nguyen and Kurt Smith and Michael Callahan and Peter Rushton and Philip Monk and Platon Mazarakis and Saad Jamal and Saurabh Srivastava and Somanshu Singla and Ashish Vaswani},
+       year={2025},
+       eprint={2506.14111},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL},
+       url={https://arxiv.org/abs/2506.14111},
+ }
+ ```
data/crawl=CC-MAIN-2013-20/train-00000-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:abd33bec48c0e1ecd25be43ab3510849461c3aed524e18cb3fea00c885c2672d
+ size 245607213
data/crawl=CC-MAIN-2013-20/train-00001-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58d35ab4c30d44dcb34983d5f11267ca2e6e017c8e8956d0df804b3a67c31137
+ size 270006221
data/crawl=CC-MAIN-2013-20/train-00002-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0df85f684847638e5572c3483ee4ef414e082edecdb6e6695283dc04e072ca47
+ size 270327538
data/crawl=CC-MAIN-2013-20/train-00003-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:30c8a0a8f73b1b7c1c50ab3c98034dc26fee93a8303a1083e244fe1f575a2eb1
+ size 245840939
data/crawl=CC-MAIN-2013-20/train-00004-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:38f2d52a652848b6f27c626b11dcbee596c2d64e257bde0a6299e2fee531ad6b
+ size 245780301
data/crawl=CC-MAIN-2013-20/train-00005-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e1fbdb0b4425dc5d1bbc812f17f51c2c8f6cc5496dbbe1cbebb8743105741d88
+ size 257758936
data/crawl=CC-MAIN-2013-20/train-00006-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:748bbe8fcf8c984f12c66b2a25aa46300337fedf32d409560cc64745d410d310
+ size 270533399
data/crawl=CC-MAIN-2013-20/train-00007-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:42c5edf31d592ab7b781a83615f0583deeb96f060bb6df41ad197238a6544d3a
+ size 258143303
data/crawl=CC-MAIN-2013-20/train-00008-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cdba1c22e4fff1cbe2f67c04008b600198c17eb05b5d4739c7023800b8794fd
+ size 258145706
data/crawl=CC-MAIN-2013-20/train-00009-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19623306a9c45eb132f08ab329c8ec0bfae0bcc35d5e5d76688a7085b7ed00f6
+ size 257754235
data/crawl=CC-MAIN-2013-20/train-00010-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f9be39ff9ab9629c12a5fc98ed353fee2967d28996bd99788b288ff3f4d1dff7
+ size 258158002
data/crawl=CC-MAIN-2013-20/train-00011-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7b65e9256305cdaa09c7589143550ecccd59056ead745a548ff0d32d689a9fb5
+ size 246179760
data/crawl=CC-MAIN-2013-20/train-00012-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2444448ffd4bcccd4baacc32aef2b38c2fc8f1897a0840232a1e8e412ba0c986
+ size 268771572
data/crawl=CC-MAIN-2013-20/train-00013-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f920847fd7dc7ca6fb7096ada5c5f30cc6f1d315630e06cba3e646af87b7a031
+ size 245802806
data/crawl=CC-MAIN-2013-20/train-00014-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0b8d6a856c1d5e15dd1c337d45db10237bf520d5c4db683a9d2bf20485b988f
+ size 258372422
data/crawl=CC-MAIN-2013-20/train-00015-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6a7978d32d6b87945ca30b7e16c76897f937e78e70b586e6cebbee82dbfa0ac9
+ size 244984920
data/crawl=CC-MAIN-2013-20/train-00016-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b84715eca2da02139e3bff9c989bd99521685da66a73b84ab7168c52a183c75f
+ size 258075568
data/crawl=CC-MAIN-2013-20/train-00017-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fa3f570fb4062a8e59ab9f750a66d61f22afb1e0e92b5037f83ae170d5746250
+ size 245016429
data/crawl=CC-MAIN-2013-20/train-00018-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:98e123a1cb8a26661f75806869c4f44b4563dbf2d38316a28e00d8f263ff23bf
+ size 244995194
data/crawl=CC-MAIN-2013-20/train-00019-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:97f41f3a7eca27673b305bfef1b44a32c80cd9f48fdb670efa9eef63f5dd47f2
+ size 245490419
data/crawl=CC-MAIN-2013-20/train-00020-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1c6a4bbe0f8c3599f3ce8e1ee575d010c3cc25bc5bdaefcd98561d25a665da42
+ size 245780253
data/crawl=CC-MAIN-2013-20/train-00021-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0cd1bc0fa04a3f78d81a91e9a47cfde88d6aa1f2f55e5a96671ba2c8a40327f3
+ size 258167481
data/crawl=CC-MAIN-2013-20/train-00022-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:54ca85b280da152148b81e7f9adef89f3e70f61ec57bdb70301f339aebbafb08
+ size 257639005
data/crawl=CC-MAIN-2013-20/train-00023-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df0a62b0221707d49d03e04a40ebb1d02a938606588a163676f59b5331229a11
+ size 257518283
data/crawl=CC-MAIN-2013-20/train-00024-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd76fab7b309189adf0a46ed4faee7a1ac96388637fece7c5707d2b7f857f52c
+ size 259074684
data/crawl=CC-MAIN-2013-20/train-00025-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:515c1240256949ca0907626635b22c2ff65324d266731425d4d6d036f013eac4
+ size 245065791
data/crawl=CC-MAIN-2013-20/train-00026-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:19d0a609cd343d470c55e713c772b8b56c441ebda9a1ef5702b824e249bfe2f3
+ size 245509682
data/crawl=CC-MAIN-2013-20/train-00027-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8eb19726a747631dd67fba28834faaa42cfff137907be55e9387b7f7c4e221e1
+ size 257915260
data/crawl=CC-MAIN-2013-20/train-00028-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5afe7eacf1fc6adaf1f51ea3302415cd300fdd4f0aca9b2391b839a1ba8d0fc8
+ size 245472872
data/crawl=CC-MAIN-2013-20/train-00029-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:18913536a4153260f8644344176c321b82838adc3d513bebb9c4cdbf4bbe1ccd
+ size 257477629
data/crawl=CC-MAIN-2013-20/train-00030-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0442faab9f5f96136dba3071aa18abf7b178290e17bda43b8767e2365fe669af
+ size 245558620
data/crawl=CC-MAIN-2013-20/train-00031-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:da156419889400c16f0da0496ff2c86da0eeabfd07ddae9c2ff81add8c4eca6d
+ size 246298090
data/crawl=CC-MAIN-2013-20/train-00032-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:780b4c779f2adb01a9bfcce7dbb8148bf19ce22ac7e339686a6f8c1231c95b4d
+ size 257559677
data/crawl=CC-MAIN-2013-20/train-00033-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8128f250925bfeaec19fee37cfd5b82c7ca1823d299ab76d25d786a70ef6c16d
+ size 258264971
data/crawl=CC-MAIN-2013-20/train-00034-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c1024756c01d3210cc9d38cfe1c51a84c535f0f3a20b93810611b3059d4e067
+ size 257441834
data/crawl=CC-MAIN-2013-20/train-00035-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10785a130baae8e8fd48a94bab392e07450ffc91a711cc2dd2c9a846fcbf819f
+ size 257938712
data/crawl=CC-MAIN-2013-20/train-00036-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3a009598c3bd1f157161bcd26c3351c4f9c787e1159b2be1d349d8acdf64b736
+ size 245526471
data/crawl=CC-MAIN-2013-20/train-00037-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:8c46dce1f662ed2a51c328198cec3a713a6de12d75cc51c01f72b08992ed77a5
+ size 259038995
data/crawl=CC-MAIN-2013-20/train-00038-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:838020f842d6320aa188961e9943079f263cc88d511432c2c06d2cd305703d67
+ size 256865378
data/crawl=CC-MAIN-2013-20/train-00039-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:510a9bcb54c5468154ffec0fb1ca3467ec4b2546000cc8ca7c90e1be2cae8441
+ size 245625865
data/crawl=CC-MAIN-2013-20/train-00040-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:52fdf4f68997de8dab23881cbec72e55d2991dc8600df71f2bdc9108875464d3
+ size 245623388
data/crawl=CC-MAIN-2013-20/train-00041-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:27f6c8310b478b344616548c7f664e294b18dbb94ee9b78dd4101ab33ce11611
+ size 257353860
data/crawl=CC-MAIN-2013-20/train-00042-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:02bee9038e739edb53b6a9488aee2e64e3125c0afa5cb3701b0304575212f062
+ size 257871885
data/crawl=CC-MAIN-2013-20/train-00043-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:408057e3e3a72633d8ddec3d1a03a64443c1581e83bab5d581249fe5299f2210
+ size 245765886
data/crawl=CC-MAIN-2013-20/train-00044-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:96f7d42d056dff144937b65689bc2771b418787642512fe09455d032026588e1
+ size 257564630
data/crawl=CC-MAIN-2013-20/train-00045-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f35fb7f8600b80b6787c929f834aaa6df5934eac91e842b2807221d1d41e465c
+ size 260248562
data/crawl=CC-MAIN-2013-20/train-00046-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:6ab84d7cd7add41a0d4366ef928874a8a419ab226240711969d0e0e2a390adf4
+ size 269980023
data/crawl=CC-MAIN-2013-20/train-00047-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fbbb796a27b1a973a1013793f1ac8db917f2c5a38c2d0a635fd2a342f16a3106
+ size 270561829
data/crawl=CC-MAIN-2013-20/train-00048-of-04233.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7111ffbacdf16202af29f0c6bb49a6e9c8fbde6f0f1d51d949e4f8cc6766b4f8
+ size 245529332