---
annotations_creators:
- expert-generated
language:
- en
language_creators:
- found
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: Quora Duplicate Questions
tags:
- sentence-transformers
- evaluation
dataset_info:
- config_name: duplicates
features:
- name: qid1
dtype: string
- name: qid2
dtype: string
splits:
- name: train
num_bytes: 4091278
num_examples: 217838
- name: dev
num_bytes: 382130
num_examples: 20017
- name: test
num_bytes: 1222432
num_examples: 65350
download_size: 4513329
dataset_size: 5695840
- config_name: questions
features:
- name: question
dtype: string
- name: qid
dtype: string
splits:
- name: train
num_bytes: 28494589
num_examples: 376493
- name: dev
num_bytes: 4060422
num_examples: 53485
- name: test
num_bytes: 8163310
num_examples: 107953
download_size: 28791952
dataset_size: 40718321
configs:
- config_name: duplicates
data_files:
- split: train
path: duplicates/train-*
- split: dev
path: duplicates/dev-*
- split: test
path: duplicates/test-*
- config_name: questions
data_files:
- split: train
path: questions/train-*
- split: dev
path: questions/dev-*
- split: test
path: questions/test-*
---

# Dataset Card for Quora Duplicate Questions
This dataset contains the Quora Question Pairs dataset in a format that is easily used with the `ParaphraseMiningEvaluator` in Sentence Transformers. The data was originally created by Quora for the Quora Question Pairs Kaggle competition.
## Usage

```python
from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import ParaphraseMiningEvaluator

# Load the Quora Duplicates Mining dataset
questions_dataset = load_dataset("sentence-transformers/quora-duplicates-mining", "questions", split="dev")
duplicates_dataset = load_dataset("sentence-transformers/quora-duplicates-mining", "duplicates", split="dev")

# Create a mapping from qid to question & a list of duplicates (qid1, qid2)
qid_to_questions = dict(zip(questions_dataset["qid"], questions_dataset["question"]))
duplicates = list(zip(duplicates_dataset["qid1"], duplicates_dataset["qid2"]))

# Initialize the paraphrase mining evaluator
paraphrase_mining_evaluator = ParaphraseMiningEvaluator(qid_to_questions, duplicates, name="quora-duplicates-dev")

# Load a model to evaluate
model = SentenceTransformer("all-MiniLM-L6-v2")
results = paraphrase_mining_evaluator(model)
print(results)
# => {
#     'quora-duplicates-dev_average_precision': 0.5537837023752262,
#     'quora-duplicates-dev_f1': 0.542585123346778,
#     'quora-duplicates-dev_precision': 0.5112918195076678,
#     'quora-duplicates-dev_recall': 0.5779587350751861,
#     'quora-duplicates-dev_threshold': 0.8290803134441376,
# }
```
## Dataset Subsets

### `questions` subset

- Columns: "question", "qid"
- Column types: `str`, `str`
- Examples:
  ```python
  {
    'question': 'How do I prepare for TCS IT Wiz?',
    'qid': '107646',
  }
  ```
- Collection strategy: A direct copy of the `quora-IR-dataset/duplicate-mining` data as generated from `create_splits.py`.
- Deduplicated: No
### `duplicates` subset

- Columns: "qid1", "qid2"
- Column types: `str`, `str`
- Examples:
  ```python
  {
    'qid1': '43345',
    'qid2': '43346',
  }
  ```
- Collection strategy: A direct copy of the `quora-IR-dataset/duplicate-mining` data as generated from `create_splits.py`.
- Deduplicated: No
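Because the `duplicates` subset stores only question IDs, it must be joined against the `questions` subset to recover the actual text pairs. The sketch below illustrates that join with toy in-memory dicts mimicking the two subsets' columns; the question texts for qids `43345`/`43346` are invented placeholders, not actual dataset rows. With the real data, you would obtain the columns via `load_dataset` as shown in the Usage section.

```python
# Toy stand-ins for the "questions" and "duplicates" subsets.
# The texts for qids 43345/43346 are hypothetical examples.
questions = {
    "qid": ["43345", "43346", "107646"],
    "question": [
        "How can I learn Python?",
        "What is the best way to learn Python?",
        "How do I prepare for TCS IT Wiz?",
    ],
}
duplicates = {"qid1": ["43345"], "qid2": ["43346"]}

# Map each qid to its question text, then resolve each (qid1, qid2)
# pair into a (text, text) pair.
qid_to_question = dict(zip(questions["qid"], questions["question"]))
text_pairs = [
    (qid_to_question[q1], qid_to_question[q2])
    for q1, q2 in zip(duplicates["qid1"], duplicates["qid2"])
]
print(text_pairs[0])
# => ('How can I learn Python?', 'What is the best way to learn Python?')
```

Such resolved text pairs can serve as positive pairs for training, in addition to their role in evaluation via `ParaphraseMiningEvaluator`.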