Datasets:

Modalities: Text
Formats: parquet
Languages: code
ArXiv: 1812.08693
Libraries: Datasets, pandas
License: c-uda
parquet-converter committed on
Commit fce1735 · 1 Parent(s): b38ad58

Update parquet files

.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
 
README.md DELETED
@@ -1,219 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - code
- license:
- - c-uda
- multilinguality:
- - other-programming-languages
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text2text-generation
- task_ids: []
- pretty_name: CodeXGlueCcCodeRefinement
- tags:
- - debugging
- dataset_info:
- - config_name: medium
-   features:
-   - name: id
-     dtype: int32
-   - name: buggy
-     dtype: string
-   - name: fixed
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 32614834
-     num_examples: 52364
-   - name: validation
-     num_bytes: 4086741
-     num_examples: 6546
-   - name: test
-     num_bytes: 4063673
-     num_examples: 6545
-   download_size: 39979724
-   dataset_size: 40765248
- - config_name: small
-   features:
-   - name: id
-     dtype: int32
-   - name: buggy
-     dtype: string
-   - name: fixed
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 13006719
-     num_examples: 46680
-   - name: validation
-     num_bytes: 1629250
-     num_examples: 5835
-   - name: test
-     num_bytes: 1619708
-     num_examples: 5835
-   download_size: 15555421
-   dataset_size: 16255677
- ---
-
- # Dataset Card for "code_x_glue_cc_code_refinement"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits-sample-size)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement
-
- ### Dataset Summary
-
- CodeXGLUE code-refinement dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement
-
- We use the dataset released by this paper (https://arxiv.org/pdf/1812.08693.pdf). The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets (i.e. small and medium) based on the function length.
-
- ### Supported Tasks and Leaderboards
-
- - `text2text-generation-other-debugging`: The dataset can be used to train a model for automatically fixing buggy code.
-
- ### Languages
-
- - Java **programming** language
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### medium
-
- An example of 'train' looks as follows.
- ```
- {
-     "buggy": "public static TYPE_1 init ( java.lang.String name , java.util.Date date ) { TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; java.util.Calendar VAR_2 = java.util.Calendar.getInstance ( ) ; VAR_2 . METHOD_2 ( date ) ; VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }\n",
-     "fixed": "public static TYPE_1 init ( java.lang.String name , java.util.Date date ) { TYPE_1 VAR_1 = new TYPE_1 ( ) ; VAR_1 . METHOD_1 ( name ) ; java.util.Calendar VAR_2 = null ; if ( date != null ) { VAR_2 = java.util.Calendar.getInstance ( ) ; VAR_2 . METHOD_2 ( date ) ; } VAR_1 . METHOD_3 ( VAR_2 ) ; return VAR_1 ; }\n",
-     "id": 0
- }
- ```
-
- #### small
-
- An example of 'validation' looks as follows.
- ```
- {
-     "buggy": "public java.util.List < TYPE_1 > METHOD_1 ( ) { java.util.ArrayList < TYPE_1 > VAR_1 = new java.util.ArrayList < TYPE_1 > ( ) ; for ( TYPE_2 VAR_2 : VAR_3 ) { VAR_1 . METHOD_2 ( VAR_2 . METHOD_1 ( ) ) ; } return VAR_1 ; } \n",
-     "fixed": "public java.util.List < TYPE_1 > METHOD_1 ( ) { return VAR_1 ; } \n",
-     "id": 0
- }
- ```
-
- ### Data Fields
-
- In the following, each data field is explained for each config. The data fields are the same among all splits.
-
- #### medium, small
-
- |field name| type | description                    |
- |----------|------|--------------------------------|
- |id        |int32 | Index of the sample            |
- |buggy     |string| The buggy version of the code  |
- |fixed     |string| The correct version of the code|
-
- ### Data Splits
-
- | name |train|validation|test|
- |------|----:|---------:|---:|
- |medium|52364|      6546|6545|
- |small |46680|      5835|5835|
-
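For quick reference, the fields and splits above can be inspected with the `datasets` library; a minimal sketch, assuming the dataset is published under the name used in this card:

```python
from datasets import load_dataset

# Minimal sketch: load one configuration of the dataset described above.
# Assumes the dataset is available under the name used in this card.
ds = load_dataset("code_x_glue_cc_code_refinement", "small")  # or "medium"

print(ds)  # DatasetDict with "train", "validation" and "test" splits

example = ds["train"][0]
print(example["id"])     # int32 index of the sample
print(example["buggy"])  # abstracted buggy Java function
print(example["fixed"])  # the corresponding fixed version
```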
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed]
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- Every public GitHub event between March 2011 and October 2017 was downloaded from GitHub Archive and processed using the Google BigQuery APIs.
- [More Information Needed]
-
- #### Who are the source language producers?
-
- Software Engineering developers.
-
- ### Annotations
-
- #### Annotation process
-
- Automatically annotated by filtering commit messages containing the pattern: ("fix" or "solve") and ("bug" or "issue" or "problem" or "error"). A statistically significant sample (95% confidence level with a 5% confidence interval) was manually evaluated by two authors to check whether the filtered bug/fix pairs were correct. After all disagreements were settled, the authors concluded that 97.6% were true positives.
-
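An illustrative sketch of that filtering heuristic (the stated pattern only, not the authors' actual mining code):

```python
# Illustrative only: the commit-message pattern described above,
# ("fix" or "solve") and ("bug" or "issue" or "problem" or "error").
def looks_like_bug_fix(message: str) -> bool:
    msg = message.lower()
    has_fix_word = any(word in msg for word in ("fix", "solve"))
    has_bug_word = any(word in msg for word in ("bug", "issue", "problem", "error"))
    return has_fix_word and has_bug_word

print(looks_like_bug_fix("Fix NPE issue in date handling"))  # True
print(looks_like_bug_fix("Add unit tests"))                  # False
```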
- #### Who are the annotators?
-
- Heuristics and the authors of the paper.
-
- ### Personal and Sensitive Information
-
- [More Information Needed]
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed]
-
- ### Discussion of Biases
-
- [More Information Needed]
-
- ### Other Known Limitations
-
- [More Information Needed]
-
- ## Additional Information
-
- ### Dataset Curators
-
- https://github.com/microsoft, https://github.com/madlag
-
- ### Licensing Information
-
- Computational Use of Data Agreement (C-UDA) License.
-
- ### Citation Information
-
- ```
- @article{CodeXGLUE,
-   title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
-   year={2020},}
- ```
-
- ### Contributions
-
- Thanks to @madlag (and partly also @ncoop57) for adding this dataset.
 
code_x_glue_cc_code_refinement.py DELETED
@@ -1,93 +0,0 @@
- from typing import List
-
- import datasets
-
- from .common import TrainValidTestChild
- from .generated_definitions import DEFINITIONS
-
-
- _DESCRIPTION = """We use the dataset released by this paper(https://arxiv.org/pdf/1812.08693.pdf). The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets ( i.e.small and medium) based on the function length."""
- _CITATION = """@article{10.1145/3340544,
- author = {Tufano, Michele and Watson, Cody and Bavota, Gabriele and Penta, Massimiliano Di and White, Martin and Poshyvanyk, Denys},
- title = {An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation},
- year = {2019},
- issue_date = {October 2019},
- publisher = {Association for Computing Machinery},
- address = {New York, NY, USA},
- volume = {28},
- number = {4},
- issn = {1049-331X},
- url = {https://doi-org.proxy.wm.edu/10.1145/3340544},
- doi = {10.1145/3340544},
- abstract = {Millions of open source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. To explore such a potential, we perform an empirical study to assess the feasibility of using Neural Machine Translation techniques for learning bug-fixing patches for real defects. First, we mine millions of bug-fixes from the change histories of projects hosted on GitHub in order to extract meaningful examples of such bug-fixes. Next, we abstract the buggy and corresponding fixed code, and use them to train an Encoder-Decoder model able to translate buggy code into its fixed version. In our empirical investigation, we found that such a model is able to fix thousands of unique buggy methods in the wild. Overall, this model is capable of predicting fixed patches generated by developers in 9--50% of the cases, depending on the number of candidate patches we allow it to generate. Also, the model is able to emulate a variety of different Abstract Syntax Tree operations and generate candidate patches in a split second.},
- journal = {ACM Trans. Softw. Eng. Methodol.},
- month = sep,
- articleno = {19},
- numpages = {29},
- keywords = {bug-fixes, Neural machine translation}
- }"""
-
-
- class CodeXGlueCcCodeRefinementImpl(TrainValidTestChild):
-     _DESCRIPTION = _DESCRIPTION
-     _CITATION = _CITATION
-
-     _FEATURES = {
-         "id": datasets.Value("int32"),  # Index of the sample
-         "buggy": datasets.Value("string"),  # The buggy version of the code
-         "fixed": datasets.Value("string"),  # The correct version of the code
-     }
-
-     _SUPERVISED_KEYS = ["fixed"]
-
-     def generate_urls(self, split_name):
-         size = self.info["parameters"]["size"]
-         for key in "buggy", "fixed":
-             yield key, f"{size}/{split_name}.buggy-fixed.{key}"
-
-     def _generate_examples(self, split_name, file_paths):
-         """This function returns the examples in the raw (text) form."""
-         # Open each file (one for the buggy source, one for the fixed target)
-         files = {k: open(file_paths[k], encoding="utf-8") for k in file_paths}
-
-         id_ = 0
-         while True:
-             # Read a single line from each file
-             entries = {k: files[k].readline() for k in file_paths}
-
-             empty = self.check_empty(entries)
-             if empty:
-                 # We are done: end of files
-                 return
-
-             entries["id"] = id_
-             yield id_, entries
-             id_ += 1
-
-
- CLASS_MAPPING = {
-     "CodeXGlueCcCodeRefinement": CodeXGlueCcCodeRefinementImpl,
- }
-
-
- class CodeXGlueCcCodeRefinement(datasets.GeneratorBasedBuilder):
-     BUILDER_CONFIG_CLASS = datasets.BuilderConfig
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name=name, description=info["description"]) for name, info in DEFINITIONS.items()
-     ]
-
-     def _info(self):
-         name = self.config.name
-         info = DEFINITIONS[name]
-         if info["class_name"] in CLASS_MAPPING:
-             self.child = CLASS_MAPPING[info["class_name"]](info)
-         else:
-             raise RuntimeError(f"Unknown python class for dataset configuration {name}")
-         ret = self.child._info()
-         return ret
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         return self.child._split_generators(dl_manager=dl_manager)
-
-     def _generate_examples(self, split_name, file_paths):
-         return self.child._generate_examples(split_name, file_paths)
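Stripped of the builder machinery, the pairing done in `_generate_examples` amounts to reading the two parallel text files in lockstep; a standalone sketch, using file names that follow the `<split>.buggy-fixed.<key>` pattern emitted by `generate_urls`:

```python
# Standalone sketch of the pairing logic above: read the parallel "buggy"
# and "fixed" files line by line and yield one example per line pair.
# Note: the real script additionally checks (via check_empty) that both
# files have the same number of lines; zip() would silently truncate.
def read_buggy_fixed_pairs(buggy_path="train.buggy-fixed.buggy",
                           fixed_path="train.buggy-fixed.fixed"):
    with open(buggy_path, encoding="utf-8") as fb, open(fixed_path, encoding="utf-8") as ff:
        for id_, (buggy, fixed) in enumerate(zip(fb, ff)):
            yield {"id": id_, "buggy": buggy, "fixed": fixed}
```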
 
common.py DELETED
@@ -1,75 +0,0 @@
- from typing import List
-
- import datasets
-
-
- # Citation, taken from https://github.com/microsoft/CodeXGLUE
- _DEFAULT_CITATION = """@article{CodeXGLUE,
- title={CodeXGLUE: A Benchmark Dataset and Open Challenge for Code Intelligence},
- year={2020},}"""
-
-
- class Child:
-     _DESCRIPTION = None
-     _FEATURES = None
-     _CITATION = None
-     SPLITS = {"train": datasets.Split.TRAIN}
-     _SUPERVISED_KEYS = None
-
-     def __init__(self, info):
-         self.info = info
-
-     def homepage(self):
-         return self.info["project_url"]
-
-     def _info(self):
-         # This is the description that will appear on the datasets page.
-         return datasets.DatasetInfo(
-             description=self.info["description"] + "\n\n" + self._DESCRIPTION,
-             features=datasets.Features(self._FEATURES),
-             homepage=self.homepage(),
-             citation=self._CITATION or _DEFAULT_CITATION,
-             supervised_keys=self._SUPERVISED_KEYS,
-         )
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
-         SPLITS = self.SPLITS
-         _URL = self.info["raw_url"]
-         urls_to_download = {}
-         for split in SPLITS:
-             if split not in urls_to_download:
-                 urls_to_download[split] = {}
-
-             for key, url in self.generate_urls(split):
-                 if not url.startswith("http"):
-                     url = _URL + "/" + url
-                 urls_to_download[split][key] = url
-
-         downloaded_files = {}
-         for k, v in urls_to_download.items():
-             downloaded_files[k] = dl_manager.download_and_extract(v)
-
-         return [
-             datasets.SplitGenerator(
-                 name=SPLITS[k],
-                 gen_kwargs={"split_name": k, "file_paths": downloaded_files[k]},
-             )
-             for k in SPLITS
-         ]
-
-     def check_empty(self, entries):
-         all_empty = all([v == "" for v in entries.values()])
-         all_non_empty = all([v != "" for v in entries.values()])
-
-         if not all_non_empty and not all_empty:
-             raise RuntimeError("Parallel data files should have the same number of lines.")
-
-         return all_empty
-
-
- class TrainValidTestChild(Child):
-     SPLITS = {
-         "train": datasets.Split.TRAIN,
-         "valid": datasets.Split.VALIDATION,
-         "test": datasets.Split.TEST,
-     }
 
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"medium": {"description": "CodeXGLUE code-refinement dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement\n\nWe use the dataset released by this paper(https://arxiv.org/pdf/1812.08693.pdf). The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. Their dataset contains two subsets ( i.e.small and medium) based on the function length.", "citation": "@article{10.1145/3340544,\nauthor = {Tufano, Michele and Watson, Cody and Bavota, Gabriele and Penta, Massimiliano Di and White, Martin and Poshyvanyk, Denys},\ntitle = {An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation},\nyear = {2019},\nissue_date = {October 2019},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nvolume = {28},\nnumber = {4},\nissn = {1049-331X},\nurl = {https://doi-org.proxy.wm.edu/10.1145/3340544},\ndoi = {10.1145/3340544},\nabstract = {Millions of open source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. To explore such a potential, we perform an empirical study to assess the feasibility of using Neural Machine Translation techniques for learning bug-fixing patches for real defects. First, we mine millions of bug-fixes from the change histories of projects hosted on GitHub in order to extract meaningful examples of such bug-fixes. Next, we abstract the buggy and corresponding fixed code, and use them to train an Encoder-Decoder model able to translate buggy code into its fixed version. In our empirical investigation, we found that such a model is able to fix thousands of unique buggy methods in the wild. Overall, this model is capable of predicting fixed patches generated by developers in 9--50% of the cases, depending on the number of candidate patches we allow it to generate. Also, the model is able to emulate a variety of different Abstract Syntax Tree operations and generate candidate patches in a split second.},\njournal = {ACM Trans. Softw. Eng. 
Methodol.},\nmonth = sep,\narticleno = {19},\nnumpages = {29},\nkeywords = {bug-fixes, Neural machine translation}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/code-refinement", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "buggy": {"dtype": "string", "id": null, "_type": "Value"}, "fixed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "fixed", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_code_refinement", "config_name": "medium", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 32614834, "num_examples": 52364, "dataset_name": "code_x_glue_cc_code_refinement"}, "validation": {"name": "validation", "num_bytes": 4086741, "num_examples": 6546, "dataset_name": "code_x_glue_cc_code_refinement"}, "test": {"name": "test", "num_bytes": 4063673, "num_examples": 6545, "dataset_name": "code_x_glue_cc_code_refinement"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/medium/train.buggy-fixed.buggy": {"num_bytes": 16188348, "checksum": "4570731680fa183650864e8729a7354d235c9a3ef42f0085ace3441418074085"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/medium/train.buggy-fixed.fixed": {"num_bytes": 15798070, "checksum": "009c121662602642bc55f6882f220aea6a738e6a11f2c4df86e7fe3cd30c175c"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/medium/valid.buggy-fixed.buggy": {"num_bytes": 2028309, "checksum": "8ad01f88be2009599007f40427458d6d2601fe93f2f1d65b0f46b7d414a3add2"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/medium/valid.buggy-fixed.fixed": {"num_bytes": 1979872, "checksum": "7ef5e4b2e95914e0eceb4f2cf6dfad0641625145319b9836db70d3f8745ad2d6"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/medium/test.buggy-fixed.buggy": {"num_bytes": 2014594, "checksum": "21107528c3b25bfdec24d0c4c18a953de31c26f3795a7d7c9e108a60396bcd38"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/medium/test.buggy-fixed.fixed": {"num_bytes": 1970531, "checksum": "4b13298647e9a782bf908d4a26710e97a1846f5513a9bf1aa46ac8223fb84b3d"}}, "download_size": 39979724, "post_processing_size": null, "dataset_size": 40765248, "size_in_bytes": 80744972}, "small": {"description": "CodeXGLUE code-refinement dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement\n\nWe use the dataset released by this paper(https://arxiv.org/pdf/1812.08693.pdf). The source side is a Java function with bugs and the target side is the refined one. All the function and variable names are normalized. 
Their dataset contains two subsets ( i.e.small and medium) based on the function length.", "citation": "@article{10.1145/3340544,\nauthor = {Tufano, Michele and Watson, Cody and Bavota, Gabriele and Penta, Massimiliano Di and White, Martin and Poshyvanyk, Denys},\ntitle = {An Empirical Study on Learning Bug-Fixing Patches in the Wild via Neural Machine Translation},\nyear = {2019},\nissue_date = {October 2019},\npublisher = {Association for Computing Machinery},\naddress = {New York, NY, USA},\nvolume = {28},\nnumber = {4},\nissn = {1049-331X},\nurl = {https://doi-org.proxy.wm.edu/10.1145/3340544},\ndoi = {10.1145/3340544},\nabstract = {Millions of open source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. To explore such a potential, we perform an empirical study to assess the feasibility of using Neural Machine Translation techniques for learning bug-fixing patches for real defects. First, we mine millions of bug-fixes from the change histories of projects hosted on GitHub in order to extract meaningful examples of such bug-fixes. Next, we abstract the buggy and corresponding fixed code, and use them to train an Encoder-Decoder model able to translate buggy code into its fixed version. In our empirical investigation, we found that such a model is able to fix thousands of unique buggy methods in the wild. Overall, this model is capable of predicting fixed patches generated by developers in 9--50% of the cases, depending on the number of candidate patches we allow it to generate. Also, the model is able to emulate a variety of different Abstract Syntax Tree operations and generate candidate patches in a split second.},\njournal = {ACM Trans. Softw. Eng. 
Methodol.},\nmonth = sep,\narticleno = {19},\nnumpages = {29},\nkeywords = {bug-fixes, Neural machine translation}\n}", "homepage": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/code-refinement", "license": "", "features": {"id": {"dtype": "int32", "id": null, "_type": "Value"}, "buggy": {"dtype": "string", "id": null, "_type": "Value"}, "fixed": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": {"input": "fixed", "output": ""}, "task_templates": null, "builder_name": "code_x_glue_cc_code_refinement", "config_name": "small", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 13006719, "num_examples": 46680, "dataset_name": "code_x_glue_cc_code_refinement"}, "validation": {"name": "validation", "num_bytes": 1629250, "num_examples": 5835, "dataset_name": "code_x_glue_cc_code_refinement"}, "test": {"name": "test", "num_bytes": 1619708, "num_examples": 5835, "dataset_name": "code_x_glue_cc_code_refinement"}}, "download_checksums": {"https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/small/train.buggy-fixed.buggy": {"num_bytes": 6509949, "checksum": "dfb4366dedb73dd40f78c3af870ccb0a1aeff2d9ceb45585df26c99897740748"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/small/train.buggy-fixed.fixed": {"num_bytes": 5936570, "checksum": "c98b1139265d33e787a9dd742a464e7eb5bd137ebb3fcb54f0416ee7672739f3"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/small/valid.buggy-fixed.buggy": {"num_bytes": 815315, "checksum": "e83a0c524cdce5a4492dfe0c9bc7d642aa5ed267ddbae5ffd1e563fe54cae6b8"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/small/valid.buggy-fixed.fixed": {"num_bytes": 743907, "checksum": "dc9c23594350988dcdaf456a1e8eea1dc86c81d1c0ff9985ff6d63d15196ffd2"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/small/test.buggy-fixed.buggy": {"num_bytes": 809941, "checksum": "d2e675094f471b3bfbb9419eb7cf14dca11f8d41185319d3310cbc3f517df323"}, "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/small/test.buggy-fixed.fixed": {"num_bytes": 739739, "checksum": "e7ec462d00d253ddec3d1cef7a06cfc99db1cfe94039a6607ed14afeef6be04a"}}, "download_size": 15555421, "post_processing_size": null, "dataset_size": 16255677, "size_in_bytes": 31811098}}
 
 
generated_definitions.py DELETED
@@ -1,24 +0,0 @@
- DEFINITIONS = {
-     "medium": {
-         "class_name": "CodeXGlueCcCodeRefinement",
-         "dataset_type": "Code-Code",
-         "description": "CodeXGLUE code-refinement dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement",
-         "dir_name": "code-refinement",
-         "name": "medium",
-         "parameters": {"size": "medium"},
-         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/code-refinement",
-         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data",
-         "sizes": {"test": 6545, "train": 52364, "validation": 6546},
-     },
-     "small": {
-         "class_name": "CodeXGlueCcCodeRefinement",
-         "dataset_type": "Code-Code",
-         "description": "CodeXGLUE code-refinement dataset, available at https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/code-refinement",
-         "dir_name": "code-refinement",
-         "name": "small",
-         "parameters": {"size": "small"},
-         "project_url": "https://github.com/madlag/CodeXGLUE/tree/main/Code-Code/code-refinement",
-         "raw_url": "https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data",
-         "sizes": {"test": 5835, "train": 46680, "validation": 5835},
-     },
- }
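For illustration, `_split_generators` in common.py prefixes each relative path yielded by `generate_urls` with the config's `raw_url` above; a small sketch of the URLs this produces for the `small` config (these match the entries listed in dataset_infos.json):

```python
from generated_definitions import DEFINITIONS  # assumes the module above is importable

# Reproduce the URL construction done in common.py with generate_urls.
info = DEFINITIONS["small"]
raw_url = info["raw_url"]
size = info["parameters"]["size"]

for split_name in ("train", "valid", "test"):
    for key in ("buggy", "fixed"):
        print(f"{raw_url}/{size}/{split_name}.buggy-fixed.{key}")
# e.g. https://raw.githubusercontent.com/madlag/CodeXGLUE/main/Code-Code/code-refinement/data/small/train.buggy-fixed.buggy
```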
 
medium/code_x_glue_cc_code_refinement-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:f3cfa8fdf0a95918bdb5723c586919206075cf597c44ad1642c9ed53b47da426
+ size 1488082
medium/code_x_glue_cc_code_refinement-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e0f3e44f3140e948dc635809086c922c60453ef76fb5c584c5ffc84b835e4b04
+ size 11943276
medium/code_x_glue_cc_code_refinement-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ed8ac79c6b6e67b7dbb83f7e4836ae34810b71087472364e4ac3edfb3fe2d22c
+ size 1498198
small/code_x_glue_cc_code_refinement-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c1759c7f8253c63c142474213839f9601100128870a6457c9d254fc0673a3fb9
+ size 588577
small/code_x_glue_cc_code_refinement-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:dae33832f8b5df001b775592c49074b2d016a89d29e1f63e2fc9e276957cb6ac
+ size 4715250
small/code_x_glue_cc_code_refinement-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:72af4b2470df9c9f4eb3ad7adaa19f59c23f76065c1d3213ebdfe2d19b436610
+ size 590632
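Once converted, the shards can be read directly with pandas (one of the libraries listed for this dataset); a minimal sketch, assuming a local checkout of this repository:

```python
import pandas as pd

# Read one converted shard from a local checkout of this repository.
df = pd.read_parquet("small/code_x_glue_cc_code_refinement-train.parquet")

print(df.columns.tolist())  # expected: ['id', 'buggy', 'fixed']
print(len(df))              # expected: 46680 rows (small/train split)
print(df.iloc[0]["buggy"])
```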