---
dataset_info:
  features:
    - name: sts-id
      dtype: string
    - name: sts-score
      dtype: float64
    - name: sentence1
      dtype: string
    - name: sentence2
      dtype: string
    - name: paraphrase
      dtype: int64
    - name: Human Annotation - P1
      dtype: int64
    - name: Human Annotation - P2
      dtype: int64
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: test
      num_bytes: 58088
      num_examples: 338
  download_size: 37035
  dataset_size: 58088
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
license: apache-2.0
task_categories:
  - text-classification
language:
  - en
pretty_name: STS-H
---

# STS-Hard Test Set

The STS-Hard dataset is a paraphrase detection test set derived from the STSBenchmark dataset. It was introduced as part of *PARAPHRASUS: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models*. The test set includes the final paraphrase label as well as the individual annotation labels from the two annotators:

- P1: The semanticist.
- P2: A student annotator.

For more details, refer to the [original paper](https://aclanthology.org/2025.coling-main.585/), presented at COLING 2025.
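
The schema above can be explored directly with the `datasets` and `pandas` libraries. Below is a minimal sketch, assuming a Hub repository id of `Andrianos/sts-hard` (a placeholder, not confirmed by this card; substitute the dataset's actual path):

```python
from datasets import load_dataset

# NOTE: "Andrianos/sts-hard" is a placeholder repository id;
# use the dataset's actual Hugging Face Hub path.
ds = load_dataset("Andrianos/sts-hard", split="test")

print(ds)      # 338 examples: sts-id, sts-score, sentence1, sentence2, ...
print(ds[0])   # inspect a single row

# Convert to pandas to inspect the two annotators' individual
# labels alongside the binary `paraphrase` label.
df = ds.to_pandas()
p1, p2 = df["Human Annotation - P1"], df["Human Annotation - P2"]
print(f"Raw P1/P2 agreement: {(p1 == p2).mean():.2%}")
print(df["paraphrase"].value_counts())
```

Since `paraphrase` is stored as an `int64`, it can be used directly as the target in a binary text-classification evaluation loop.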


## Citation

If you use this dataset, please cite it using the following BibTeX entry:

```bibtex
@inproceedings{michail-etal-2025-paraphrasus,
    title = "{PARAPHRASUS}: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models",
    author = "Michail, Andrianos  and
      Clematide, Simon  and
      Opitz, Juri",
    editor = "Rambow, Owen  and
      Wanner, Leo  and
      Apidianaki, Marianna  and
      Al-Khalifa, Hend  and
      Eugenio, Barbara Di  and
      Schockaert, Steven",
    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
    month = jan,
    year = "2025",
    address = "Abu Dhabi, UAE",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.coling-main.585/",
    pages = "8749--8762",
    abstract = "The task of determining whether two texts are paraphrases has long been a challenge in NLP. However, the prevailing notion of paraphrase is often quite simplistic, offering only a limited view of the vast spectrum of paraphrase phenomena. Indeed, we find that evaluating models in a paraphrase dataset can leave uncertainty about their true semantic understanding. To alleviate this, we create PARAPHRASUS, a benchmark designed for multi-dimensional assessment, benchmarking and selection of paraphrase detection models. We find that paraphrase detection models under our fine-grained evaluation lens exhibit trade-offs that cannot be captured through a single classification dataset. Furthermore, PARAPHRASUS allows prompt calibration for different use cases, tailoring LLM models to specific strictness levels. PARAPHRASUS includes 3 challenges spanning over 10 datasets, including 8 repurposed and 2 newly annotated; we release it along with a benchmarking library at https://github.com/impresso/paraphrasus"
}
```