|
---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: text1
    dtype: string
  - name: text2
    dtype: string
  - name: label
    dtype:
      class_label:
        names:
          '0': '-1'
          '1': '1'
  splits:
  - name: train
    num_bytes: 150266647.47592032
    num_examples: 50712
  - name: test
    num_bytes: 64403801.52407967
    num_examples: 21735
  download_size: 129675237
  dataset_size: 214670449.0
---
|
# Dataset Card for "WikiMedical_sentence_similarity" |
|
|
|
WikiMedical_sentence_similarity is a ready-to-use sentence similarity dataset adapted from [wiki_medical_terms](https://huggingface.co/datasets/gamino/wiki_medical_terms).
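
The dataset follows the standard Hugging Face `datasets` layout described in the metadata above (two text columns and a binary `ClassLabel`), so it can be loaded directly. A minimal sketch, with the repository namespace left as a placeholder:

```python
from datasets import load_dataset

# "<user>/WikiMedical_sentence_similarity" is a placeholder: replace <user>
# with the namespace hosting this dataset on the Hugging Face Hub.
ds = load_dataset("<user>/WikiMedical_sentence_similarity")

print(ds)              # DatasetDict with "train" and "test" splits
print(ds["train"][0])  # {"text1": ..., "text2": ..., "label": 0 or 1}

# "label" is a ClassLabel: index 0 maps to the name "-1" (negative pair)
# and index 1 maps to the name "1" (positive pair).
print(ds["train"].features["label"].names)  # ['-1', '1']
```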
|
|
|
The preprocessing followed three steps (a sketch of the pipeline is given below):

- Each text is split into sentences of 256 tokens (NLTK tokenizer).

- Each sentence is paired with a positive sentence when one is found, and with a negative one; negatives are drawn randomly from the whole dataset.

- The train and test splits correspond to a 70%/30% partition.
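
A minimal sketch of this pipeline, assuming NLTK's `word_tokenize` for the 256-token chunking and treating consecutive chunks of the same article as positive pairs; the helper names `chunk_text` and `build_pairs` are illustrative and not taken from the original processing script:

```python
import random
from nltk.tokenize import word_tokenize  # requires: nltk.download("punkt")

def chunk_text(text, max_tokens=256):
    """Split a document into consecutive chunks of at most `max_tokens` NLTK tokens."""
    tokens = word_tokenize(text)
    return [" ".join(tokens[i:i + max_tokens]) for i in range(0, len(tokens), max_tokens)]

def build_pairs(documents, seed=0):
    """Build (text1, text2, label) triples from raw article texts.
    Label 1 marks a positive pair (here: adjacent chunks of the same article),
    label -1 a negative pair whose second element is drawn at random from the
    whole corpus. A 70%/30% train/test split is applied afterwards."""
    rng = random.Random(seed)
    chunked = [chunk_text(doc) for doc in documents]
    all_chunks = [c for chunks in chunked for c in chunks]
    pairs = []
    for chunks in chunked:
        for i, chunk in enumerate(chunks):
            if i + 1 < len(chunks):                    # positive pair, if one exists
                pairs.append((chunk, chunks[i + 1], 1))
            pairs.append((chunk, rng.choice(all_chunks), -1))  # random negative
    return pairs
```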
|
|
|
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |