Update ReadMe
README.md
CHANGED
@@ -1,3 +1,62 @@
---
license: mit
language:
- en
tags:
- ccnet
- english
- common crawl
- pretraining
- internet
- massive
- text corpora
- general text
pretty_name: CCNet split (4M)
size_categories:
- 1M<n<10M
---
# CCNet Reproduced Split (4M)

## Overview

This dataset is a reproduced subset of the larger CCNet dataset, tailored to give researchers easy access to high-quality, web-crawled text for natural language processing tasks. CCNet is built on data from Common Crawl, a non-profit organization that crawls the web and freely provides its archives to the public. This subset contains 4 million datapoints that have been carefully processed and structured.

The original CCNet repository is archived and unmaintained, so we hope this split is useful for those who still need to work with the dataset.

Feel free to contact me if you have any questions or requests.
## Dataset Description

### Data Collection

The data in this subset was collected as part of the Common Crawl project and has been processed to retain only high-quality text content along with useful metadata. Each entry is derived from web pages across a diverse set of domains, ensuring broad representation of the internet.
### Data Format

Each entry in the dataset is stored as a JSON object in compressed, newline-delimited form (JSONL). This format makes it straightforward to parse the data lazily, one record at a time, which keeps memory usage low even for large datasets.
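For example, a file in this layout can be streamed record by record without ever holding the whole split in memory. The sketch below assumes a gzip-compressed JSONL file; the file name `ccnet_split.jsonl.gz` is only a placeholder for a shard of this dataset:

```python
import gzip
import json

def iter_records(path):
    """Yield one parsed JSON record per line without loading the whole file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Peek at the first few records.
for i, record in enumerate(iter_records("ccnet_split.jsonl.gz")):
    print(record["source_domain"], record["language"])
    if i >= 4:
        break
```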
#### Fields

- `url`: The webpage URL from which the content is derived (part of CC).
- `date_download`: The date when the webpage was downloaded.
- `digest`: SHA1 digest of the webpage's content.
- `length`: Number of characters in the webpage content after deduplication.
- `nlines`: Number of lines in the webpage content after deduplication.
- `source_domain`: Web domain of the webpage.
- `title`: Title of the webpage (part of CC).
- `raw_content`: The text content of the webpage after deduplication.
- `original_nlines`: Number of lines before deduplication.
- `original_length`: Number of characters before deduplication.
- `language`: Language of the webpage, detected using fastText language identification.
- `language_score`: Confidence score of the detected language.
- `perplexity`: Perplexity of the text based on a language model trained on Wikipedia.
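These metadata fields make it straightforward to filter the corpus before use. As an illustrative sketch (the `keep_record` helper and its thresholds are assumptions for this example, not part of the original CCNet pipeline), one could keep only confidently identified English documents with low perplexity, reusing `iter_records` from the sketch above:

```python
def keep_record(record, min_language_score=0.9, max_perplexity=500.0):
    """Illustrative quality filter; the thresholds are examples, not recommendations."""
    return (
        record.get("language") == "en"
        and record.get("language_score", 0.0) >= min_language_score
        and record.get("perplexity", float("inf")) <= max_perplexity
    )

# Lazily filter records while streaming the file.
filtered = (r for r in iter_records("ccnet_split.jsonl.gz") if keep_record(r))
```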
## Usage

This dataset is suitable for pre-training language models, studying internet-based text, and other NLP tasks that require diverse text inputs. To use this dataset, load it via the Hugging Face Datasets library:

```python
from datasets import load_dataset

dataset = load_dataset("ccnet_split_4m")
```
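With 4 million records, materializing the whole split at once may be unnecessary. A minimal sketch of streaming access is shown below, assuming the split is named `train`; the repository id is kept as written above and may need the full `namespace/name` path on the Hub:

```python
from datasets import load_dataset

# Iterate over records lazily instead of downloading and loading the full split.
streamed = load_dataset("ccnet_split_4m", split="train", streaming=True)

for i, example in enumerate(streamed):
    print(example["url"], example["perplexity"])
    if i >= 4:
        break
```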