---
license: apache-2.0
size_categories:
- 1M<n<10M
tags:
- code
dataset_info:
  features:
  - name: max_stars_count
    dtype: int64
  - name: text
    dtype: string
  - name: token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 10787104987
    num_examples: 2130812
  download_size: 3723229232
  dataset_size: 10787104987
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Common Starcoder dataset

This dataset is generated from [bigcode/starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata).

Total GPT2 Tokens: 4,649,163,171
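A minimal loading example with the 🤗 `datasets` library. The repository id below is a placeholder; replace it with this dataset's id. Streaming is used here only to avoid the full ~3.7 GB download.

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual id.
ds = load_dataset("your-namespace/common-starcoder", split="train", streaming=True)

# Peek at a couple of samples; columns follow the metadata above.
for example in ds.take(2):
    print(example["max_stars_count"], example["token_count"], example["text"][:80])
```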

## Generation Process
1. We filtered the original dataset to the common languages C, C++, Java, Python, and JSON.
2. We removed the columns "id", "max_stars_repo_path", and "max_stars_repo_name" so the data can be mixed with other datasets.
3. After removing these fields, we shuffled the dataset with random seed 42.
4. We filtered the data to samples with "max_stars_count" > 300 and shuffled again.
5. We further reduced the dataset size with `select(range(current_size, 2_500_000))`; however, only 2.13M samples remained.
6. We added an "n_tokens" column by using the GPT2 tokenizer to count the tokens in the "content" field (a sketch of these steps follows this list).
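A minimal, untested sketch of the pipeline above using the 🤗 `datasets` and `transformers` libraries. The `data_dir` names, the seed of the second shuffle, the interpretation of the `select` call as a 2.5M-sample cap, and the column handling are assumptions, not the exact generation script.

```python
from datasets import load_dataset, concatenate_datasets
from transformers import GPT2TokenizerFast

# Assumption: language subsets of bigcode/starcoderdata are loaded via data_dir;
# the directory names below are illustrative.
languages = ["c", "cpp", "java", "python", "json"]
parts = [
    load_dataset("bigcode/starcoderdata", data_dir=lang, split="train")
    for lang in languages
]
ds = concatenate_datasets(parts)

# Drop columns that would clash when mixing with other datasets.
ds = ds.remove_columns(["id", "max_stars_repo_path", "max_stars_repo_name"])

# Shuffle, keep highly starred repos, shuffle again (second seed is an assumption).
ds = ds.shuffle(seed=42)
ds = ds.filter(lambda ex: ex["max_stars_count"] is not None and ex["max_stars_count"] > 300)
ds = ds.shuffle(seed=42)

# The card describes `select(range(current_size, 2_500_000))`; here we read that
# as capping the dataset at 2.5M samples (fewer remain after filtering).
ds = ds.select(range(min(len(ds), 2_500_000)))

# Count GPT-2 tokens per sample. Note: in the published dataset the columns
# appear as "text" and "token_count" per the metadata above.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def add_token_count(example):
    return {"n_tokens": len(tokenizer(example["content"])["input_ids"])}

ds = ds.map(add_token_count)
```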