agentlans committed
Commit 35f6264 · verified · Parent: 3d9a014

Update README.md

Files changed (1):
  1. README.md +10 -13
README.md CHANGED
@@ -8,35 +8,32 @@ language:
 ---
 # C4 English Tokenized Samples

-This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset. It provides a preprocessed subset of the original C4 data, making it easier to use for various natural language processing tasks.
-
-## Dataset Details
-
-- **Source**: First 110 000 entries from the `en` split of [allenai/c4](https://huggingface.co/datasets/allenai/c4)
-- **Preprocessing**:
-  1. Tokenized using [spaCy](https://spacy.io/)'s `en_core_web_sm` model
-  2. Lowercased
-  3. Tokens joined with spaces
+This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset for natural language processing tasks.
+
+The first 125 000 entries from the `en` split of [allenai/c4](https://huggingface.co/datasets/allenai/c4)
+were tokenized using [spaCy](https://spacy.io/)'s `en_core_web_sm` model, and the tokens were joined with spaces.

 ## Features

 - `text`: Original text from C4
-- `tokenized`: Preprocessed text (tokenized, lowercased, and space-joined)
+- `tokenized`: The tokenized and space-joined text
 - `num_tokens`: Number of tokens after tokenization
+- `num_punct_tokens`: Number of punctuation tokens after tokenization

 ## Example

 ```json
 {
-  "text": "The Denver Board of Education opened the 2017-18 school year with an update on projects that include new construction, upgrades, heat mitigation [...]",
-  "tokenized": "the denver board of education opened the 2017 - 18 school year with an update on projects that include new construction , upgrades , heat mitigation [...]",
-  "num_tokens": 192
+  "text": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496.\nSyracuse (1958) . 7.5 x 4.25, cloth, 32 pp, a v.g. copy [...]",
+  "tokenized": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496 . \n Syracuse ( 1958 ) . 7.5 x 4.25 , cloth , 32 pp , a v.g . copy [...]",
+  "num_tokens": 84,
+  "num_punct_tokens": 19
 }
 ```

 ## Usage

-This preprocessed dataset can be particularly useful for:
+This dataset can be useful for:
 - Text classification tasks
 - Language modeling
 - Sentiment analysis
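
The updated README describes the pipeline only in prose, so here is a minimal sketch of how the described preprocessing could be reproduced. The field names (`tokenized`, `num_tokens`, `num_punct_tokens`), the source split, and the 125 000-entry cutoff come from the diff above; everything else (streaming, the disabled pipeline components) is an assumption, not the author's actual script.

```python
# Sketch of the preprocessing described above -- an assumption, not the
# author's original script.
# Setup: pip install datasets spacy
#        python -m spacy download en_core_web_sm
import spacy
from datasets import load_dataset

# Only the tokenizer output is needed, so disable the heavier components.
nlp = spacy.load(
    "en_core_web_sm",
    disable=["tok2vec", "tagger", "parser", "attribute_ruler", "lemmatizer", "ner"],
)

def preprocess(example):
    doc = nlp(example["text"])
    tokens = [t.text for t in doc]
    return {
        "tokenized": " ".join(tokens),                     # tokens joined with spaces
        "num_tokens": len(tokens),                         # total token count
        "num_punct_tokens": sum(t.is_punct for t in doc),  # punctuation tokens
    }

# Stream C4's `en` split and keep the first 125 000 entries, per the README.
c4 = load_dataset("allenai/c4", "en", split="train", streaming=True)
processed = c4.take(125_000).map(preprocess)

print(next(iter(processed))["num_tokens"])
```

Streaming avoids downloading all of C4 just to take the first slice.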
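
A matching loading sketch for the Usage section; the repo id below is a placeholder, since the diff never names the dataset repository this README belongs to.

```python
# Loading the published dataset. The repo id is hypothetical -- substitute
# the actual Hugging Face path this README belongs to.
from datasets import load_dataset

ds = load_dataset("agentlans/c4-en-tokenized", split="train")  # hypothetical repo id
example = ds[0]
print(example["num_tokens"], example["num_punct_tokens"])
print(example["tokenized"][:80])
```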