added code for train-test split
README.md CHANGED
@@ -54,8 +54,7 @@ Each instance in the dataset consists of:
3. *template*: A string containing the corresponding SPARQL query

### Train-Test Split

-Train and test sets are split in a balanced way by the template name. That is, if there are more than 20 data tuples per template_name, 20 are assigned to the test set and the rest is assigned to the train set. If there are less than 20 data tuples per template_name, 10% of the data tuples are assigned to the test set and the rest is assigned to the train set. If there is only one data tuple per template_name, which is the case for 2 templates, the data tuple is assigned solely into the train set.
-

## Intended Use

This dataset is intended for research in semantic parsing and related tasks within the context of lexicographic data in the Wikidata Knowledge Graph. It can be used to train and evaluate models that convert natural language to SPARQL queries.
@@ -107,3 +106,38 @@ Please cite the following if you use this dataset in your work:
For any questions or issues with the dataset, please contact the author at [email protected].
### Train-Test Split

+Train and test sets are split in a balanced way by the template name. That is, if there are more than 20 data tuples per template_name, 20 are assigned to the test set and the rest are assigned to the train set. If there are 20 or fewer data tuples per template_name, 10% of the data tuples are assigned to the test set and the rest are assigned to the train set. If there is only one data tuple per template_name, which is the case for 2 templates, the data tuple is assigned solely to the train set. The code for generating the train-test split can be found in the appendix.
## Appendix

##### Code to Generate the Train-Test Split on full_data in Python

```python
from sklearn.model_selection import train_test_split
import pandas as pd

# get the template names
template_names = lexicographicDataWikidataSPARQL['template_name'].unique()

# build the train and test sets template by template
test_set = pd.DataFrame()
train_set = pd.DataFrame()

for template_name in template_names:
    # get the samples for the template_name
    samples = lexicographicDataWikidataSPARQL[lexicographicDataWikidataSPARQL['template_name'] == template_name]

    if 1 < len(samples) <= 20:
        # 20 or fewer samples: assign 10% to the test set
        print(f"{template_name} has 20 or fewer samples")
        train, test = train_test_split(samples, test_size=0.1)
    elif len(samples) == 1:
        # a single sample goes into the train set only; reset test so a
        # stale split from the previous template is not concatenated again
        print(f"{template_name} has only 1 sample")
        train, test = samples, pd.DataFrame()
    else:
        # more than 20 samples: assign exactly 20 to the test set
        print(f"{template_name} has more than 20 samples")
        train, test = train_test_split(samples, test_size=20)

    test_set = pd.concat([test_set, test])
    train_set = pd.concat([train_set, train])
```
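The split rules can be checked on a small synthetic example. The DataFrame below is a stand-in, not the real dataset, and `random_state=0` is added only so the sketch is reproducible:

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the dataset: templates with 1, 10, and 30 tuples.
full_data = pd.DataFrame({
    'template_name': ['single'] * 1 + ['small'] * 10 + ['large'] * 30,
    'utterance': [f'question {i}' for i in range(41)],
})

test_set = pd.DataFrame()
train_set = pd.DataFrame()
for name in full_data['template_name'].unique():
    samples = full_data[full_data['template_name'] == name]
    if len(samples) == 1:
        train, test = samples, pd.DataFrame()
    elif len(samples) <= 20:
        train, test = train_test_split(samples, test_size=0.1, random_state=0)
    else:
        train, test = train_test_split(samples, test_size=20, random_state=0)
    test_set = pd.concat([test_set, test])
    train_set = pd.concat([train_set, train])

# 'large' contributes exactly 20 test rows, 'small' contributes
# ceil(10 * 0.1) = 1, and 'single' contributes none.
print(len(train_set), len(test_set))  # -> 20 21
```

Note that a float `test_size` is rounded up by scikit-learn, so even a template with only 2 tuples places one of them in the test set.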