---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- en
license:
- cc-by-4.0
- gfdl
multilinguality:
- monolingual
size_categories:
- 100M<n<200M
source_datasets:
- https://github.com/shibing624/code-autocomplete
- https://github.com/bharathgs/Awesome-pytorch-list
- https://github.com/akullpp/awesome-java
- https://github.com/fffaraz/awesome-cpp
task_categories:
- sequence-modeling
task_ids:
- language-modeling
---
# Dataset Card for "SourceCode"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Repository:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
- **Leaderboard:** [code-autocomplete](https://github.com/shibing624/code-autocomplete)
- **Size of downloaded dataset files:** 105 MB
- **Total amount of disk used:** 570 MB
### Dataset Summary
The SourceCode dataset is a collection of source code gathered from GitHub "awesome" repository lists; it contains primarily Python, Java, and C++ code.
It can be used for NLP tasks such as language modeling and code/text generation (a loading sketch follows the data source list below).
Data sources:
- PYTHON_CODE: https://github.com/bharathgs/Awesome-pytorch-list
- JAVA_CODE: https://github.com/akullpp/awesome-java
- CPP_CODE: https://github.com/fffaraz/awesome-cpp
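A minimal loading sketch with the 🤗 `datasets` library. The repository id `shibing624/source_code`, the config name `python`, and the split names are assumptions inferred from this card rather than guaranteed identifiers:
```python
# Minimal loading sketch (assumed repo id, config name, and split names).
from datasets import load_dataset

# "shibing624/source_code" and the "python" config are assumptions based on
# this card; adjust them to the actual Hub id/config if they differ.
dataset = load_dataset("shibing624/source_code", "python")

print(dataset)                      # DatasetDict; split names may be train/validation/test
print(dataset["train"][0]["text"])  # each example exposes a single `text` field
```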
### Supported Tasks and Leaderboards
- language modeling
- code generation, **Leaderboard:** [code-autocomplete](https://github.com/shibing624/code-autocomplete) (see the generation sketch after this list)
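As a rough illustration of the intended use (not the authors' exact training setup), the sketch below runs a generic `gpt2` model over a Python prompt with the 🤗 `transformers` text-generation pipeline; in practice the model would first be fine-tuned on this dataset, as done in the code-autocomplete project:
```python
# Illustrative only: "gpt2" is a generic base model, not a model trained on this dataset.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "import argparse\n\ndef _parse_args():\n    parser = argparse.ArgumentParser("
completions = generator(prompt, max_new_tokens=32, num_return_sequences=1)
print(completions[0]["generated_text"])
```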
### Languages
- programming languages: Python, Java, C++
- natural language: English
## Dataset Structure
### Data Instances
An example from the `train` split looks as follows (the `text` value was too long and has been cropped):
```
{
    "text": """
import json
import argparse


def _parse_args():
    parser = argparse.ArgumentParser(
        description=__doc__,
        formatter_class=argparse.RawTextHelpFormatter,
    )
    parser.add_argument(
        '--model-file',
        required=True,
        help=(
            'A pt file from '
            'https://github.com/pytorch/fairseq/tree/main/examples/hubert'
        )
    )
    return parser.parse_args()
"""
}
```
### Data Fields
The data fields are the same among all splits.
- `text`: a `string` feature.
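A quick way to confirm the schema, assuming the dataset was loaded as in the earlier sketch:
```python
# Inspect the schema; assumes `dataset` was loaded as in the loading sketch above.
print(dataset["train"].features)   # expected: {'text': Value(dtype='string', id=None)}
print(dataset["train"][0].keys())  # expected: dict_keys(['text'])
```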
### Data Splits
#### python
```shell
$ wc -l python/*
10000 python/test.txt
5215412 python/train.txt
10000 python/valid.txt
5235412 total
```
#### java
```shell
$ wc -l java/*
950083 java/test.txt
2802880 java/train.txt
940803 java/valid.txt
4693766 total
```
#### cpp
```shell
$ wc -l cpp/*
1060014 cpp/test.txt
3119241 cpp/train.txt
1099124 cpp/valid.txt
5278379 total
```
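If you work from the raw `train.txt`/`valid.txt`/`test.txt` files directly, the generic `text` loader of 🤗 `datasets` can reproduce these splits; the paths below assume a local copy laid out as in the listings above:
```python
# Load the raw line-based files directly; paths assume a local copy
# organized as python/{train,valid,test}.txt as shown above.
from datasets import load_dataset

python_code = load_dataset(
    "text",
    data_files={
        "train": "python/train.txt",
        "validation": "python/valid.txt",
        "test": "python/test.txt",
    },
)
print({split: len(ds) for split, ds in python_code.items()})
```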
## Dataset Creation
### Curation Rationale
This dataset was uploaded to Hugging Face Datasets to support code generation and language modeling research.
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
Citation:
APA:
```text
Xu, M. code-autocomplete: Code AutoComplete with GPT2 model (Version 0.0.4) [Computer software]. https://github.com/shibing624/code-autocomplete
```
BibTeX:
```bibtex
@software{Xu_code-autocomplete_Code_AutoComplete,
author = {Xu, Ming},
title = {code-autocomplete: Code AutoComplete with GPT2 model},
url = {https://github.com/shibing624/code-autocomplete},
version = {0.0.4}
}
```
### Annotations
#### Annotation process
#### Who are the annotators?
No annotators were involved; the dataset consists of raw source code without annotations.
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
This dataset was developed as a benchmark for evaluating code generation models.
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
The dataset is curated from GitHub "awesome" programming code repositories.
### Licensing Information
GNU Free Documentation License v1.3 or later.
For research use only.
### Contributions
Thanks to [@shibing624](https://github.com/shibing624) for adding this dataset.