Modalities: Tabular, Text · Formats: parquet · Languages: English · Size: < 1K · Libraries: Datasets, pandas
Andrianos committed (verified)
Commit 0b76537 · 1 parent: 6c9cd48

Updated BibTeX file

Files changed (1): README.md (+19 -8)
README.md CHANGED
@@ -52,12 +52,23 @@ For more details, refer to the [original paper](https://arxiv.org/abs/2409.12060
 If you use this dataset, please cite it using the following BibTeX entry:
 
 ```bibtex
-@misc{michail2024paraphrasuscomprehensivebenchmark,
-      title={PARAPHRASUS : A Comprehensive Benchmark for Evaluating Paraphrase Detection Models},
-      author={Andrianos Michail and Simon Clematide and Juri Opitz},
-      year={2024},
-      eprint={2409.12060},
-      archivePrefix={arXiv},
-      primaryClass={cs.CL},
-      url={https://arxiv.org/abs/2409.12060},
+@inproceedings{michail-etal-2025-paraphrasus,
+    title = "{PARAPHRASUS}: A Comprehensive Benchmark for Evaluating Paraphrase Detection Models",
+    author = "Michail, Andrianos and
+      Clematide, Simon and
+      Opitz, Juri",
+    editor = "Rambow, Owen and
+      Wanner, Leo and
+      Apidianaki, Marianna and
+      Al-Khalifa, Hend and
+      Eugenio, Barbara Di and
+      Schockaert, Steven",
+    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
+    month = jan,
+    year = "2025",
+    address = "Abu Dhabi, UAE",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2025.coling-main.585/",
+    pages = "8749--8762",
+    abstract = "The task of determining whether two texts are paraphrases has long been a challenge in NLP. However, the prevailing notion of paraphrase is often quite simplistic, offering only a limited view of the vast spectrum of paraphrase phenomena. Indeed, we find that evaluating models in a paraphrase dataset can leave uncertainty about their true semantic understanding. To alleviate this, we create PARAPHRASUS, a benchmark designed for multi-dimensional assessment, benchmarking and selection of paraphrase detection models. We find that paraphrase detection models under our fine-grained evaluation lens exhibit trade-offs that cannot be captured through a single classification dataset. Furthermore, PARAPHRASUS allows prompt calibration for different use cases, tailoring LLM models to specific strictness levels. PARAPHRASUS includes 3 challenges spanning over 10 datasets, including 8 repurposed and 2 newly annotated; we release it along with a benchmarking library at https://github.com/impresso/paraphrasus"
 }
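
Since the card lists parquet files with Datasets and pandas support, here is a minimal loading sketch, assuming a standard Hub dataset repo; the repo id and split name are placeholders, not taken from this page:

```python
# Minimal loading sketch -- the repo id and split name are placeholders;
# substitute the actual values shown on the dataset page.
from datasets import load_dataset

ds = load_dataset("impresso/paraphrasus")  # hypothetical Hub repo id
print(ds)                                  # shows available splits and row counts

df = ds["test"].to_pandas()                # hypothetical split name; parquet rows as a pandas DataFrame
print(df.head())
```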