---
license: cc-by-nc-sa-4.0
language:
- de
tags:
- embeddings
- clustering
- benchmark
size_categories:
- 10K<n<100K
---
This dataset can be used as a benchmark for clustering German text embeddings.
The dataset contains news article titles and is based on the One Million Posts Corpus and 10kGNAD. It contains 10'275 unique samples, 10 splits with 1'436 to 9'962 samples each, and 9 unique classes. Splits are built similarly to MTEB's TwentyNewsgroupsClustering.
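A minimal sketch of how such a clustering split is typically evaluated, in the style of MTEB's clustering protocol (k-means on sentence embeddings, scored with V-measure). The split shape (9 classes) follows the card; the random embeddings are a stand-in for a real sentence-embedding model, so the exact score is not meaningful.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(0)

# Stand-in for one benchmark split: 9 classes, as described in the card.
n_samples, n_classes, dim = 200, 9, 32
labels = rng.integers(0, n_classes, size=n_samples)

# Stand-in embeddings; in practice these would come from embedding
# the German news article titles with the model under evaluation.
embeddings = rng.normal(size=(n_samples, dim))

# Cluster with k-means (k = number of classes) and score against the
# gold labels with V-measure, as in MTEB-style clustering evaluation.
kmeans = KMeans(n_clusters=n_classes, n_init=10, random_state=0)
pred = kmeans.fit_predict(embeddings)
score = v_measure_score(labels, pred)
print(f"V-measure: {score:.3f}")
```

With real embeddings, this score (averaged over all 10 splits) would be the benchmark result for a model.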
Have a look at the German Text Embedding Clustering Benchmark (GitHub, Paper) for more information, datasets, and evaluation results.
If you use this dataset in your work, please cite the following paper:
@inproceedings{wehrli-etal-2023-german,
title = "{G}erman Text Embedding Clustering Benchmark",
author = "Wehrli, Silvan and
Arnrich, Bert and
Irrgang, Christopher",
editor = "Georges, Munir and
Herygers, Aaricia and
Friedrich, Annemarie and
Roth, Benjamin",
booktitle = "Proceedings of the 19th Conference on Natural Language Processing (KONVENS 2023)",
month = sep,
year = "2023",
address = "Ingolstadt, Germany",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.konvens-main.20",
pages = "187--201",
}