---
language:
- en
multilinguality:
- monolingual
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: Wikipedia Sections
tags:
- sentence-transformers
dataset_info:
- config_name: pair
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  splits:
  - name: train
    num_bytes: 490913561
    num_examples: 1779417
  - name: validation
    num_bytes: 60891304
    num_examples: 220400
  - name: test
    num_bytes: 61385426
    num_examples: 222957
  download_size: 295222520
  dataset_size: 613190291
- config_name: triplet
  features:
  - name: anchor
    dtype: string
  - name: positive
    dtype: string
  - name: negative
    dtype: string
  splits:
  - name: train
    num_bytes: 733058519
    num_examples: 1779417
  - name: validation
    num_bytes: 90881953
    num_examples: 220400
  - name: test
    num_bytes: 91705993
    num_examples: 222957
  download_size: 500545462
  dataset_size: 915646465
configs:
- config_name: pair
  data_files:
  - split: train
    path: pair/train-*
  - split: validation
    path: pair/validation-*
  - split: test
    path: pair/test-*
- config_name: triplet
  data_files:
  - split: train
    path: triplet/train-*
  - split: validation
    path: triplet/validation-*
  - split: test
    path: triplet/test-*
---
# Dataset Card for Wikipedia Sections
This dataset contains pairs and triplets that can be used to train and fine-tune Sentence Transformer embedding models. The dataset originates from Dor et al. and was downloaded from this download link. The "anchor" column contains sentences from Wikipedia, whereas the "positive" column contains other sentences from the same section. The "negative" column contains sentences from other sections.
## Dataset Subsets
### pair subset

- Columns: "anchor", "positive"
- Column types: `str`, `str`
- Examples:
- Collection strategy: Reading the Wikipedia Sections dataset from https://sbert.net.
- Deduplicated: Yes
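Pairs like these are typically trained with a contrastive objective that treats the other positives in a batch as negatives (the idea behind sentence-transformers' MultipleNegativesRankingLoss). The sketch below is a pure-Python illustration on toy 2-d vectors, not the library implementation; the function name `in_batch_negatives_loss` is made up for this example:

```python
import math

def cos(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def in_batch_negatives_loss(anchors, positives, scale=20.0):
    """Mean cross-entropy where, for each anchor i, positives[i] is the
    correct match and every other positive in the batch acts as a negative."""
    total = 0.0
    for i, a in enumerate(anchors):
        scores = [scale * cos(a, p) for p in positives]
        log_z = math.log(sum(math.exp(s) for s in scores))
        total += log_z - scores[i]
    return total / len(anchors)

# Toy "embeddings": each anchor is closest to its own positive,
# so the loss should be near zero.
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[0.9, 0.1], [0.1, 0.9]]
loss = in_batch_negatives_loss(anchors, positives)
```

This is why a pair dataset needs no explicit negatives: with a large enough batch, the other anchors' positives supply them.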
### triplet subset

- Columns: "anchor", "positive", "negative"
- Column types: `str`, `str`, `str`
- Examples:
- Collection strategy: Reading the Wikipedia Sections dataset from https://sbert.net.
- Deduplicated: Yes
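Because the triplet subset supplies an explicit negative sentence from another section, it suits a triplet margin objective: push the anchor's similarity to the positive above its similarity to the negative by a fixed margin. A pure-Python sketch with toy 2-d embeddings (the function name `triplet_loss` and the margin value are illustrative):

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def triplet_loss(anchor, positive, negative, margin=0.5):
    """Hinge loss: zero once cosine(anchor, positive) exceeds
    cosine(anchor, negative) by at least `margin`."""
    return max(0.0, margin - cosine(anchor, positive) + cosine(anchor, negative))

# A well-separated triplet incurs no loss.
loss = triplet_loss([1.0, 0.0], [0.9, 0.1], [0.0, 1.0])  # -> 0.0
```

Swapping the positive and negative in the call above produces a large loss, which is the gradient signal that pulls same-section sentences together.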