---
license: apache-2.0
size_categories:
- 1M<n<10M
tags:
- code
dataset_info:
  features:
  - name: max_stars_count
    dtype: int64
  - name: text
    dtype: string
  - name: token_count
    dtype: int64
  splits:
  - name: train
    num_bytes: 10787104987
    num_examples: 2130812
  download_size: 3723229232
  dataset_size: 10787104987
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
# Common StarCoder dataset
|
|
|
This dataset is generated from [bigcode/starcoderdata](https://huggingface.co/datasets/bigcode/starcoderdata).
|
|
|
Total GPT-2 tokens: 4,649,163,171
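
The total can be re-derived from the released "token_count" column. Below is a minimal sketch using the Hugging Face `datasets` library; the repository id is a placeholder, not this dataset's actual repo name.

```python
# Minimal sketch: re-derive the total token count from the released columns.
# The repository id below is a placeholder; substitute this dataset's actual repo id.
from datasets import load_dataset

ds = load_dataset("your-org/common-starcoder", split="train")  # placeholder repo id

total_tokens = sum(ds["token_count"])
print(f"Total GPT-2 tokens: {total_tokens:,}")  # expected: 4,649,163,171
```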
|
|
|
## Generation Process
|
1. We filtered the original dataset down to common languages: C, C++, Java, Python, and JSON (the full pipeline is sketched below this list).
|
2. We removed the columns "id", "max_stars_repo_path", and "max_stars_repo_name" so the data can be mixed more easily with other datasets.
|
3. After removing these fields, we shuffled the dataset with a random seed of 42.
|
4. We kept only samples with "max_stars_count" > 300 and shuffled again.
|
5. We capped the dataset at 2,500,000 samples with `select(range(min(current_size, 2_500_000)))`; however, only about 2.13M samples remained after the filtering above.
|
6. We added a "token_count" column by using GPT2Tokenizer to count the tokens in each sample's "content" field (exposed as "text" in the released dataset).
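
Below is a minimal sketch of the pipeline above, using the `datasets` and `transformers` libraries. This is not the original generation script: the `data_dir` names follow how bigcode/starcoderdata organizes its language subsets, and the cap and rename details are inferred from the steps and fields listed in this card.

```python
from datasets import load_dataset, concatenate_datasets
from transformers import GPT2TokenizerFast

# Step 1: load and combine the chosen language subsets of the source dataset.
# The data_dir names are assumptions based on bigcode/starcoderdata's layout.
LANGS = ["c", "cpp", "java", "python", "json"]
parts = [
    load_dataset("bigcode/starcoderdata", data_dir=lang, split="train")
    for lang in LANGS
]
ds = concatenate_datasets(parts)

# Step 2: drop columns that are irrelevant when mixing with other datasets.
ds = ds.remove_columns(["id", "max_stars_repo_path", "max_stars_repo_name"])

# Step 3: shuffle with a fixed seed for reproducibility.
ds = ds.shuffle(seed=42)

# Step 4: keep only samples from repositories with more than 300 stars,
# then shuffle again (the seed of the second shuffle is not stated in the card).
ds = ds.filter(lambda x: x["max_stars_count"] > 300)
ds = ds.shuffle(seed=42)

# Step 5: cap the dataset at 2.5M samples (only ~2.13M remain after filtering).
ds = ds.select(range(min(len(ds), 2_500_000)))

# Step 6: count GPT-2 tokens for each sample's code.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def add_token_count(example):
    example["token_count"] = len(tokenizer(example["content"])["input_ids"])
    return example

ds = ds.map(add_token_count)

# The released dataset exposes the code under a "text" column; this rename is
# not spelled out in the steps above, so the line below is an assumption.
ds = ds.rename_column("content", "text")
```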