blindsubmissions committed: Update README.md
Commit d583690 · 1 Parent(s): 7f97a7d

Files changed (1): README.md (+76 -3)
 
---
# M2CRB

## Dataset Summary
M2CRB contains pairs of text and code data covering multiple natural and programming language pairs: Spanish, Portuguese, German, and French, each paired with code snippets in Python, Java, and JavaScript. The data is curated via an automated filtering pipeline from source files within [The Stack](https://huggingface.co/datasets/bigcode/the-stack), followed by human verification to ensure accurate language classification, i.e., annotators were asked to filter out examples whose natural language did not correspond to the target language.

## Supported Tasks
M2CRB is a multilingual evaluation dataset for code-to-text and/or text-to-code models, supporting both information retrieval and conditional generation evaluations.

## Currently Supported Languages

```python
NATURAL_LANGUAGE_SET = {"es", "fr", "pt", "de"}
PROGRAMMING_LANGUAGE_SET = {"python", "java", "javascript"}
```
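
These two sets can back a small validation helper before any data is loaded. The helper below is a hypothetical convenience, not part of any released API:

```python
# Supported language codes, as listed above.
NATURAL_LANGUAGE_SET = {"es", "fr", "pt", "de"}
PROGRAMMING_LANGUAGE_SET = {"python", "java", "javascript"}

def validate_pair(prog_lang, nat_lang):
    """Raise ValueError if the requested combination is not covered."""
    if prog_lang not in PROGRAMMING_LANGUAGE_SET:
        raise ValueError(f"Unsupported programming language: {prog_lang!r}")
    if nat_lang not in NATURAL_LANGUAGE_SET:
        raise ValueError(f"Unsupported natural language: {nat_lang!r}")
    return prog_lang, nat_lang
```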
## How to get the data with a given language combination

```python
from datasets import load_dataset

def get_dataset(prog_lang, nat_lang):
    # Hub dataset id assumed here; adjust to the actual M2CRB repository path.
    dataset = load_dataset("blindsubmissions/M2CRB", split="test")
    # Keep only examples matching the requested language pair.
    test_data = dataset.filter(
        lambda example: example["docstring_language"] == nat_lang
        and example["language"] == prog_lang
    )
    return test_data
```
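
The filtering predicate used above can be exercised offline on toy records. The field values below are invented for illustration and are not real dataset rows:

```python
# Toy records mimicking the dataset's `language` and `docstring_language` fields.
records = [
    {"identifier": "suma", "language": "python", "docstring_language": "es"},
    {"identifier": "addiere", "language": "python", "docstring_language": "de"},
    {"identifier": "soma", "language": "java", "docstring_language": "pt"},
]

def matches(example, prog_lang, nat_lang):
    # Same predicate as the lambda passed to `filter` above.
    return (example["docstring_language"] == nat_lang
            and example["language"] == prog_lang)

python_es = [r for r in records if matches(r, "python", "es")]
```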

## Dataset Structure

### Data Instances
Each data instance corresponds to functions/methods occurring in licensed files that compose The Stack, i.e., files with permissive licenses collected from GitHub.

### Relevant Data Fields

- identifier (string): Function/method name.
- parameters (string): Function parameters.
- return_statement (string): Return statement, if found during parsing.
- docstring (string): Complete docstring content.
- docstring_summary (string): Summarized/processed docstring with arguments and return statements dropped.
- function (string): Actual function/method content.
- argument_list (null): List of arguments.
- language (string): Programming language of the function.
- docstring_language (string): Natural language of the docstring.
- type (string): Return type, if found during parsing.
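
An illustrative instance with these fields might look like the following. Every value here is made up for illustration only:

```python
# Hypothetical example record; values are invented, not taken from the dataset.
instance = {
    "identifier": "suma_lista",
    "parameters": "(numeros)",
    "return_statement": "return total",
    "docstring": "Suma los elementos de una lista.\n\nArgs:\n    numeros: lista de numeros.",
    "docstring_summary": "Suma los elementos de una lista.",
    "function": "def suma_lista(numeros):\n    total = sum(numeros)\n    return total",
    "argument_list": None,
    "language": "python",
    "docstring_language": "es",
    "type": "",
}
```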
## Summary of data curation pipeline

- Filtering out repositories that appear in [CodeSearchNet](https://huggingface.co/datasets/code_search_net).
- Filtering the files that belong to the programming languages of interest.
- Pre-filtering the files that likely contain text in the natural languages of interest.
- AST parsing with [Tree-sitter](https://tree-sitter.github.io/tree-sitter/).
- Performing language identification on the docstrings in the resulting set of functions/methods.
- Performing human verification/validation of the underlying language of docstrings.
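
The pipeline does not prescribe a specific language-identification tool. A minimal stopword-overlap heuristic, a toy stand-in for a trained identifier, sketches the idea behind the docstring language-identification step:

```python
# Toy stopword-based language identification; a real pipeline would use a
# trained identifier, but the scoring idea is the same.
STOPWORDS = {
    "es": {"el", "la", "los", "de", "una", "que", "y", "para"},
    "pt": {"o", "os", "de", "uma", "que", "e", "para", "nao"},
    "fr": {"le", "la", "les", "de", "une", "que", "et", "pour"},
    "de": {"der", "die", "das", "und", "eine", "fur", "nicht", "mit"},
}

def identify_language(docstring):
    """Return the language code whose stopwords overlap the docstring most."""
    tokens = set(docstring.lower().split())
    scores = {lang: len(tokens & words) for lang, words in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```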
## Social Impact of the dataset

M2CRB is released with the aim of increasing the coverage of the NLP-for-code research community by providing data for scarce combinations of languages. We expect this data to help enable more accurate information retrieval systems and text-to-code or code-to-text summarization in languages other than English.

As a subset of The Stack, this dataset inherits the de-risking efforts carried out when that dataset was built. We highlight that risks nonetheless exist and the data could be used maliciously, for instance to aid the creation of malicious code. We note, however, that this risk is shared by any publicly released code dataset.
## Discussion of Biases

The data is collected from GitHub, and naturally occurring text on that platform may contain harmful or offensive language, which could be learned by models trained on it.

Moreover, certain language combinations are more likely than others to contain well-documented code, so the resulting data is not uniformly represented across combinations.
## Known limitations

While we cover 16 scarce combinations of programming and natural languages, our evaluation dataset can be expanded to further improve its coverage.
Moreover, we use naturally occurring text in comments and docstrings rather than text produced by human annotators. The resulting data therefore has high variance in quality and depends on the documentation practices of sub-communities of software developers. However, we remark that the task our evaluation dataset defines is reflective of what searching a real codebase would look like.
Finally, some imbalance in the data is observed for the same reason, since certain language combinations are more likely than others to contain well-documented code.
## Maintenance plan

The data will be kept up to date by following The Stack releases. We will rerun our pipeline for every new release and add non-overlapping new content to both the training and testing partitions of our data.

This ensures we carry over opt-out updates and include fresh repositories.
## Update plan

- Short term:
  - Cover all 6 programming languages from CodeSearchNet.
- Long term:
  - Add an extra test set containing human-generated text/code pairs so that the gap between in-the-wild and controlled performance can be measured.
  - Include extra natural languages.
## Licensing Information
M2CRB is a subset filtered and pre-processed from [The Stack](https://huggingface.co/datasets/bigcode/the-stack), a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in M2CRB must abide by the terms of the original licenses.