Update README.md
README.md CHANGED
@@ -8,35 +8,32 @@ language:
 ---
 # C4 English Tokenized Samples
 
-This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset
+This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset for natural language processing tasks.
 
-
-
-- **Source**: First 110 000 entries from the `en` split of [allenai/c4](https://huggingface.co/datasets/allenai/c4)
-- **Preprocessing**:
-  1. Tokenized using [spaCy](https://spacy.io/)'s `en_core_web_sm` model
-  2. Lowercased
-  3. Tokens joined with spaces
+The first 125 000 entries from the `en` split of [allenai/c4](https://huggingface.co/datasets/allenai/c4)
+were tokenized using [spaCy](https://spacy.io/)'s `en_core_web_sm` model. Tokens joined with spaces.
 
 ## Features
 
 - `text`: Original text from C4
-- `tokenized`:
+- `tokenized`: The tokenized and space-joined text
 - `num_tokens`: Number of tokens after tokenization
+- `num_punct_tokens`: Number of punctuation tokens after tokenization
 
 ## Example
 
 ```json
 {
-  "text": "
-  "tokenized": "
-  "num_tokens":
+  "text": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496.\nSyracuse (1958) . 7.5 x 4.25, cloth, 32 pp, a v.g. copy [...]",
+  "tokenized": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496 . \n Syracuse ( 1958 ) . 7.5 x 4.25 , cloth , 32 pp , a v.g . copy [...]",
+  "num_tokens": 84,
+  "num_punct_tokens": 19
 }
 ```
 
 ## Usage
 
-This
+This dataset can be useful for:
 - Text classification tasks
 - Language modeling
 - Sentiment analysis
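For context, here is a minimal sketch of the preprocessing the updated README describes: tokenize C4 text with spaCy's `en_core_web_sm`, join the tokens with spaces, and count total and punctuation tokens. The field names follow the README; the streaming setup, the 1 000-entry cap, and the use of `is_punct` for the punctuation count are assumptions for illustration, not the dataset's actual build script.

```python
# Sketch of the preprocessing described in the README (not the official build script).
# Assumes spacy and datasets are installed and en_core_web_sm has been downloaded
# via: python -m spacy download en_core_web_sm
from datasets import load_dataset
import spacy

nlp = spacy.load("en_core_web_sm")

def tokenize_entry(entry):
    doc = nlp(entry["text"])
    tokens = [t.text for t in doc]
    return {
        "text": entry["text"],
        "tokenized": " ".join(tokens),                      # tokens joined with spaces
        "num_tokens": len(tokens),                          # total token count
        "num_punct_tokens": sum(t.is_punct for t in doc),   # assumption: punctuation via is_punct
    }

# Stream the C4 `en` split and take the first N entries (the README uses 125 000).
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
sample = [tokenize_entry(e) for _, e in zip(range(1000), c4)]
```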
|
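And a hedged loading sketch for the tasks listed under ## Usage. The repo id below is a placeholder, since the commit does not state the dataset's actual Hub id.

```python
from datasets import load_dataset

# Placeholder repo id; substitute the dataset's actual Hub id.
ds = load_dataset("your-username/c4-en-tokenized-samples", split="train")

# Inspect the token counts of the first record.
print(ds[0]["num_tokens"], ds[0]["num_punct_tokens"])

# Example: keep shorter documents before language-model pretraining.
short = ds.filter(lambda ex: ex["num_tokens"] < 512)
```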