---
license: cc-by-nc-sa-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering word embeddings for <b>German</b>.

The dataset contains news article titles and is based on the [One Million Posts Corpus](https://ofai.github.io/million-post-corpus/) and [10kGNAD](https://github.com/tblock/10kGNAD). It contains 10'275 unique samples, 10 splits with 1'436 to 9'962 samples each, and 9 unique classes. Splits are built similarly to MTEB's [TwentyNewsgroupsClustering](https://huggingface.co/datasets/mteb/twentynewsgroups-clustering).

Have a look at the German Text Embedding Clustering Benchmark ([GitHub](https://github.com/ClimSocAna/tecb-de), [Paper](https://arxiv.org/abs/2401.02709)) for more information, datasets, and evaluation results.
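Following MTEB's clustering tasks, such benchmarks are typically scored by running k-means on the sentence embeddings of each split and comparing the predicted clusters to the gold classes with the V-measure. A minimal sketch of that evaluation step, using scikit-learn and random stand-in embeddings (in practice these would come from a German sentence-embedding model):

```python
# Hedged sketch of MTEB-style clustering evaluation: k-means + V-measure.
# The embeddings below are random stand-ins, not real model outputs.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(42)
n_samples, n_classes, dim = 200, 9, 32          # 9 classes, as in this dataset

labels = rng.integers(0, n_classes, size=n_samples)  # gold class labels
embeddings = rng.normal(size=(n_samples, dim))       # stand-in embeddings

# Cluster the embeddings into as many clusters as there are gold classes.
kmeans = MiniBatchKMeans(n_clusters=n_classes, n_init=10, random_state=42)
pred = kmeans.fit_predict(embeddings)

# V-measure compares predicted clusters against gold labels (0 = worst, 1 = best).
score = v_measure_score(labels, pred)
print(f"V-measure: {score:.3f}")
```

With real embeddings, this would be repeated for each of the 10 splits and the scores averaged.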
If you use this dataset in your work, please cite the following paper:
```
@inproceedings{wehrli-etal-2023-german,
title = "{G}erman Text Embedding Clustering Benchmark",
author = "Wehrli, Silvan and
Arnrich, Bert and
Irrgang, Christopher",
editor = "Georges, Munir and
Herygers, Aaricia and
Friedrich, Annemarie and
Roth, Benjamin",
booktitle = "Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)",
month = sep,
year = "2023",
address = "Ingolstadt, Germany",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.konvens-main.20",
pages = "187--201",
}
```