Update README.md
README.md CHANGED
@@ -8,7 +8,7 @@ language:
 ---
 # C4 English Tokenized Samples
 
-This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset for natural language processing tasks.
+This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset for natural language processing (NLP) tasks.
 
 The first 125,000 entries from the `en` split of [allenai/c4](https://huggingface.co/datasets/allenai/c4)
 were tokenized using [spaCy](https://spacy.io/)'s `en_core_web_sm` model. Tokens are joined with spaces.
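The preprocessing the card describes can be sketched as follows. This is a minimal illustration, not the dataset author's actual script: the card says `en_core_web_sm` was used, but the sketch uses `spacy.blank("en")`, which provides the same rule-based English tokenizer without requiring a model download. The function name `tokenize_to_spaces` is an invention for this example.

```python
import spacy

# Hypothetical sketch of the card's preprocessing (assumption, not the
# author's script). The raw text would come from the `en` split of
# allenai/c4, e.g. via
#   load_dataset("allenai/c4", "en", split="train", streaming=True)
# taking the first 125,000 entries.
nlp = spacy.blank("en")  # rule-based English tokenizer, no model needed

def tokenize_to_spaces(text: str) -> str:
    """Tokenize `text` with spaCy and join the tokens with single spaces."""
    return " ".join(tok.text for tok in nlp(text))

print(tokenize_to_spaces("Don't panic; it's clean crawled text."))
```

Note that spaCy's tokenizer splits contractions and punctuation into separate tokens, so the space-joined output is not recoverable into the original text by simply removing spaces.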