Datasets: tianyang/repobench-r

Tasks: Text Retrieval
Modalities: Text
Sub-tasks: document-retrieval
Languages: code
Size: 100K - 1M
ArXiv: arxiv:2306.03091
License: cc-by-nc-nd-4.0
update readme

Files changed:
- README.md (+60 −92)
- repobench-r.py (+8 −8)
README.md CHANGED

````diff
@@ -18,116 +18,84 @@ task_ids:
 
 # Dataset Card for RepoBench-R
 
-## Table of Contents
-- [Dataset Card for RepoBench-R](#dataset-card-for-repobench-r)
-  - [Table of Contents](#table-of-contents)
-  - [Dataset Description](#dataset-description)
-    - [Dataset Summary](#dataset-summary)
-    - [Supported Tasks](#supported-tasks)
-  - [Dataset Structure](#dataset-structure)
-  - [Dataset Creation](#dataset-creation)
-    - [Curation Rationale](#curation-rationale)
-    - [Source Data](#source-data)
-      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
-      - [Who are the source language producers?](#who-are-the-source-language-producers)
-    - [Annotations](#annotations)
-      - [Annotation process](#annotation-process)
-      - [Who are the annotators?](#who-are-the-annotators)
-    - [Personal and Sensitive Information](#personal-and-sensitive-information)
-  - [Considerations for Using the Data](#considerations-for-using-the-data)
-    - [Social Impact of Dataset](#social-impact-of-dataset)
-    - [Discussion of Biases](#discussion-of-biases)
-    - [Other Known Limitations](#other-known-limitations)
-  - [Additional Information](#additional-information)
-    - [Licensing Information](#licensing-information)
-    - [Citation Information](#citation-information)
-    - [Contributions](#contributions)
-
 ## Dataset Description
 
 - **Homepage:** https://github.com/Leolty/repobench
 - **Paper:** https://arxiv.org/abs/2306.03091
 
+## Dataset Summary
 
-RepoBench-R is a subtask of [RepoBench](https://github.com/Leolty/repobench), targeting the retrieval component of a repository-level auto-completion
-system, focusing on extracting the most relevant code snippet from a project repository for next-line
+RepoBench-R is a subtask of [RepoBench](https://github.com/Leolty/repobench), targeting the retrieval component of a repository-level auto-completion system, focusing on retrieving the most relevant code snippet from a project repository for next-line
 code prediction.
 
-The dataset supports two programming languages, Python and Java, and contains two settings:
+## Settings
 
-- `cff`: short for `cross_file_first`, where the cross-file module in next line is first used in the current file.
-- `cfr`: short for `cross_file_random`, where the cross-file module in next line is not first used in the current file.
+- `cff`: short for cross_file_first, indicating the cross-file module in next line is first used in the current file.
+- `cfr`: short for cross_file_random, indicating the cross-file module in next line is not first used in the current file.
 
-For each setting, we provide `train` and `test` subset, and each subset with two levels of difficulty: `easy` and `hard`.
+## Supported Tasks
 
-Suppose the number of code snippets in the context is \\(k\\),
+The dataset has 4 subsets:
 
-- For the `easy` subset, we have \\(5 \leq k < 10\\).
-- For the `hard` subset, we have \\(k \geq 10\\).
+- `python-cff`: python dataset with `cff` setting.
+- `python-cfr`: python dataset with `cfr` setting.
+- `java-cff`: java dataset with `cff` setting.
+- `java-cfr`: java dataset with `cfr` setting.
 
-## Dataset Structure
+Each subset has 4 splits:
 
-## Dataset Creation
+- `train-easy`: training set with easy difficulty, where the number of code snippets in the context $$k$$ satisfies $$5 \leq k < 10$$.
+- `train-hard`: training set with hard difficulty, where the number of code snippets in the context $$k$$ satisfies $$k \geq 10$$.
+- `test-easy`: testing set with easy difficulty.
+- `test-hard`: testing set with hard difficulty.
 
-### Curation Rationale
+## Loading Data
 
-[More Information Needed]
+For example, if you want to load the `test` `cross-file-first` `python` dataset with `easy` difficulty, you can use the following code:
 
-### Source Data
+```python
+from datasets import load_dataset
 
-#### Initial Data Collection and Normalization
+dataset = load_dataset("tianyang/repobench-r", "python-cff", "test-easy")
+```
 
-[More Information Needed]
+## Dataset Structure
 
-#### Who are the source language producers?
-
-[More Information Needed]
-
-### Annotations
-
-#### Annotation process
-
-[More Information Needed]
-
-#### Who are the annotators?
-
-[More Information Needed]
-
-### Personal and Sensitive Information
-
-[More Information Needed]
-
-## Considerations for Using the Data
-
-### Social Impact of Dataset
-
-[More Information Needed]
-
-### Discussion of Biases
-
-[More Information Needed]
-
-### Other Known Limitations
-
-[More Information Needed]
-
-## Additional Information
-
-### Licensing Information
-
-[More Information Needed]
-
-### Citation Information
-
-[More Information Needed]
-
-### Contributions
+```
+{
+    "repo_name": "repository name of the data point",
+    "file_path": "path/to/file",
+    "context": [
+        "snippet 1",
+        "snippet 2",
+        // ...
+        "snippet k"
+    ],
+    "import_statement": "all import statements in the file",
+    "gold_snippet_idex": 2, // the index of the gold snippet in the context list, 0~k-1
+    "code": "the code for next-line prediction",
+    "next_line": "the next line of the code"
+}
+```
+
+## Licensing Information
+
+CC BY-NC-ND 4.0
+
+## Citation Information
+
+```bibtex
+@misc{liu2023repobench,
+      title={RepoBench: Benchmarking Repository-Level Code Auto-Completion Systems},
+      author={Tianyang Liu and Canwen Xu and Julian McAuley},
+      year={2023},
+      eprint={2306.03091},
+      archivePrefix={arXiv},
+      primaryClass={cs.CL}
+}
+```
+
+## Contributions
+
+Thanks to [@Leolty](https://github.com/Leolty) for adding this dataset.
````
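To make the card's structure block concrete, here is a minimal, self-contained sketch of how a retrieval baseline would consume one data point. The `example` dict and the `lexical_score`/`retrieve` helpers are hypothetical illustrations, not part of the dataset or its loading script; the field names (including the `gold_snippet_idex` spelling) follow the structure block as written.

```python
import re

# Hypothetical data point shaped like the card's structure block above.
example = {
    "repo_name": "demo/repo",
    "file_path": "src/app.py",
    "context": [
        "def add(a, b):\n    return a + b",
        "import os",
        "def mul(a, b):\n    return a * b",
    ],
    "import_statement": "import os",
    "gold_snippet_idex": 0,  # index of the gold snippet, 0..k-1
    "code": "result = add(3, 4)",
    "next_line": "print(result)",
}

def lexical_score(query: str, snippet: str) -> float:
    """Toy similarity: Jaccard overlap of identifier-like tokens."""
    q = set(re.findall(r"\w+", query))
    s = set(re.findall(r"\w+", snippet))
    return len(q & s) / len(q | s) if q | s else 0.0

def retrieve(ex: dict) -> int:
    """Pick the index of the context snippet most similar to the in-file code."""
    scores = [lexical_score(ex["code"], snip) for snip in ex["context"]]
    return max(range(len(scores)), key=scores.__getitem__)

predicted = retrieve(example)
print("hit" if predicted == example["gold_snippet_idex"] else "miss")
```

Averaging such hit/miss outcomes over a split yields the retrieval accuracy the benchmark reports; any stronger retriever (e.g. an embedding model) slots in by replacing `lexical_score`.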
repobench-r.py CHANGED

```diff
@@ -117,20 +117,20 @@ class RepoBenchR(datasets.GeneratorBasedBuilder):
 
         return [
             datasets.SplitGenerator(
-                name=datasets.Split("train_easy"),
-                gen_kwargs={"data_dir": data_dir, "split": "train_easy"},
+                name=datasets.Split("train-easy"),
+                gen_kwargs={"data_dir": data_dir, "split": "train-easy"},
             ),
             datasets.SplitGenerator(
                 name=datasets.Split("train_hard"),
-                gen_kwargs={"data_dir": data_dir, "split": "train_hard"},
+                gen_kwargs={"data_dir": data_dir, "split": "train-hard"},
             ),
             datasets.SplitGenerator(
-                name=datasets.Split("test_easy"),
-                gen_kwargs={"data_dir": data_dir, "split": "test_easy"},
+                name=datasets.Split("test-easy"),
+                gen_kwargs={"data_dir": data_dir, "split": "test-easy"},
             ),
             datasets.SplitGenerator(
-                name=datasets.Split("test_hard"),
-                gen_kwargs={"data_dir": data_dir, "split": "test_hard"},
+                name=datasets.Split("test-hard"),
+                gen_kwargs={"data_dir": data_dir, "split": "test-hard"},
             )
         ]
 
@@ -139,7 +139,7 @@ class RepoBenchR(datasets.GeneratorBasedBuilder):
         with gzip.open(data_dir, "rb") as f:
             data = pickle.load(f)
 
-        subset, level = split.split("_")
+        subset, level = split.split("-")
 
         for i, example in enumerate(data[subset][level]):
             yield i, {
```
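The second hunk is what makes the renaming self-consistent: `_generate_examples` derives its dictionary keys by splitting the split name, so the separator there must match the names registered in `_split_generators`. A minimal sketch of that parsing under the commit's naming scheme; the `data` payload here is hypothetical, with its nesting assumed from the `data[subset][level]` access above.

```python
# Standalone sketch of the split-name convention this commit settles on.

def parse_split(split: str) -> tuple[str, str]:
    """Map a split name like 'train-easy' to (subset, level) keys."""
    subset, level = split.split("-")
    return subset, level

# Hypothetical payload shaped like the unpickled data:
data = {
    "train": {"easy": ["ex1", "ex2"], "hard": ["ex3"]},
    "test": {"easy": ["ex4"], "hard": ["ex5"]},
}

for split in ("train-easy", "train-hard", "test-easy", "test-hard"):
    subset, level = parse_split(split)
    print(split, "->", len(data[subset][level]), "examples")
```

One caveat worth flagging: some versions of the `datasets` library validate split names against `^\w+(\.\w+)*$`, which rejects hyphens, so `datasets.Split("train-easy")` may fail there; underscore names (as in the untouched `train_hard` line above) would be the safer convention, paired with `split.split("_")`.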