---
language:
- en
task_categories:
- sentence-similarity
dataset_info:
  config_name: triplet
  features:
  - name: query
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_bytes: 12581563.792427007
    num_examples: 42076
  - name: test
    num_bytes: 3149278.207572993
    num_examples: 10532
  download_size: 1254810
  dataset_size: 15730842
configs:
- config_name: triplet
  data_files:
  - split: train
    path: triplet/train-*
  - split: test
    path: triplet/test-*
---

This dataset is the triplet subset of https://huggingface.co/datasets/sentence-transformers/sql-questions, divided into train and test splits.

The test split can be passed to [`TripletEvaluator`](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#tripletevaluator).
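For example, here is a minimal sketch of evaluating a model on the test split. The model name `all-MiniLM-L6-v2` is just a placeholder; any SentenceTransformer model works:

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import TripletEvaluator

test_dataset = load_dataset("aladar/sql-questions", "triplet", split="test")

evaluator = TripletEvaluator(
    anchors=test_dataset["query"],
    positives=test_dataset["positive"],
    negatives=test_dataset["negative"],
    name="sql-questions-test",
)

# Placeholder model; substitute the model you want to evaluate.
model = SentenceTransformer("all-MiniLM-L6-v2")
print(evaluator(model))
```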

The train and test splits don't have any queries in common.

<details>
<summary>Here's the full script used to generate this dataset</summary>

```python
import os

import datasets
from sklearn.model_selection import train_test_split


dataset = datasets.load_dataset(
    "sentence-transformers/sql-questions", "triplet", split="train"
)

# Deduplicate queries; a dict preserves insertion order, so this is deterministic
queries_unique = list({record["query"]: None for record in dataset})

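# Hold out 20% of the unique queries for the test split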
queries_tr, queries_te = train_test_split(
    queries_unique, test_size=0.2, random_state=42
)

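# Convert to sets for fast membership checks, then partition rows by query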
queries_tr = set(queries_tr)
queries_te = set(queries_te)
train_dataset = dataset.filter(lambda record: record["query"] in queries_tr)
test_dataset = dataset.filter(lambda record: record["query"] in queries_te)

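# Sanity checks: the splits share no queries and together cover the whole dataset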
assert not set(train_dataset["query"]) & set(test_dataset["query"])
assert len(train_dataset) + len(test_dataset) == len(dataset)


dataset_dict = datasets.DatasetDict({"train": train_dataset, "test": test_dataset})
dataset_dict.push_to_hub(
    "aladar/sql-questions", config_name="triplet", token=os.environ["HF_TOKEN_CREATE"]
)

```

</details>