---
language:
- sk
license: cc-by-nc-sa-4.0
tags:
- text-retrieval
size_categories:
- 1K<n<10K
multilinguality:
- monolingual
task_categories:
- text-retrieval
source_datasets:
- TUKE-DeutscheTelekom/skquad
task_ids:
- document-retrieval
config_names:
- corpus
dataset_info:
- config_name: default
features:
- name: query-id
dtype: string
- name: corpus-id
dtype: string
- name: score
dtype: float64
splits:
- name: test
num_bytes: 356665
num_examples: 12451
- config_name: corpus
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
splits:
- name: corpus
num_bytes: 8467582
num_examples: 6477
- config_name: queries
features:
- name: _id
dtype: string
- name: text
dtype: string
splits:
- name: queries
num_bytes: 82163
num_examples: 1134
configs:
- config_name: default
data_files:
- split: test
path: qrels/test.jsonl
- config_name: corpus
data_files:
- split: corpus
path: corpus.jsonl
- config_name: queries
data_files:
- split: queries
path: queries.jsonl
---
# Dataset Card for retrieval-skquad
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
## Dataset Description
- **Homepage:** [NLP KEMT TUKE](https://nlp.kemt.fei.tuke.sk)
- **Paper:** [Needs More Information]
### Dataset Summary
SK-QuAD Retrieval is a dataset for evaluating Slovak search performance with metrics such as MRR, MAP, and NDCG. It is derived from the [SK-QuAD](https://huggingface.co/datasets/TUKE-DeutscheTelekom/skquad) dataset: candidate answer paragraphs for each question were first retrieved with a search engine and then manually annotated, with the best answers for each question assigned relevance categories. The dataset provides a standardized resource for further research and development in Slovak language search evaluation.
### Languages
Slovak
## Dataset Structure
The dataset follows the structure recommended by the [BEIR](https://github.com/beir-cellar/beir/wiki/Load-your-custom-dataset) toolkit.
**corpus.jsonl**: contains one JSON dictionary per line, each with three fields: `_id` (unique document identifier), `title` (document title), and `text` (paragraph text).
For example:
```json
{"_id": "598395",
"title": "Vysoký grúň (Laborecká vrchovina)",
"text": "Cez vrch Vysoký grúň vedie hlavná červená turistická značka, ktorá zároveň vedie po hlavnom karpatskom hrebeni cez najvýchodnejší bod Slovenska – trojmedzie (1207.7 Mnm) na vrchu Kremenec (1221.0 Mnm) a prechádza po slovensko-poľskej štátnej hranici cez viacero vrchov s viacerými panoramatickými vyhliadkami, ako napr. Kamenná lúka (1200.9 Mnm), Jarabá skala (1199.0 Mnm), Ďurkovec (1188.7 Mnm), Pľaša (1162.8 Mnm), ďalej cez Ruské sedlo (801.0 Mnm), vrchy Rypy (1002.7 Mnm), Strop, (1011.2 Mnm), Černiny (929.4 Mnm), Laborecký priesmyk (684.0 Mnm) až k Duklianskemu priesmyku (502.0 Mnm)."}
```
**queries.jsonl**: contains one JSON dictionary per line, each with two fields: `_id` (unique query identifier) and `text` (query text).
For example:
```json
{"_id": "1000005",
"text": "Akú nadmorskú výšku má vrch Kremenec ?"
}
```
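The corpus and query files above can be read with Python's `json` module into the dictionary shapes used by BEIR. A minimal sketch, assuming the files have been downloaded locally (for instance with `snapshot_download`, as in the evaluation example below):
```python
import json

def load_jsonl(path):
    """Read a .jsonl file into a list of dictionaries, one per line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

# corpus: {doc_id: {"title": ..., "text": ...}}, queries: {query_id: text}
corpus = {d["_id"]: {"title": d["title"], "text": d["text"]}
          for d in load_jsonl("corpus.jsonl")}
queries = {q["_id"]: q["text"] for q in load_jsonl("queries.jsonl")}
```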
**qrels/test.tsv**: a tab-separated file with three columns: query-id, corpus-id, and score, in that order.
For example:
```
# query-id corpus-id score
1000005 598395 5
1000005 576721 0
1000005 576728 0
1000005 146843 4
1000005 520490 2
```
Scores are based on the annotators' decisions:
- 5 and 4: the paragraph contains a relevant answer
- 2: the paragraph is partially relevant
- 0: the paragraph is not relevant
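The relevance judgements can also be loaded through the Hugging Face `datasets` library using the `default` config declared in the YAML header. A short sketch, assuming the `query-id`, `corpus-id`, and `score` fields listed above:
```python
from collections import defaultdict
from datasets import load_dataset

# The "default" config holds the relevance judgements in its "test" split
qrels_ds = load_dataset("TUKE-KEMT/retrieval-skquad", "default", split="test")

# Convert to the nested {query_id: {doc_id: score}} mapping expected by BEIR
qrels = defaultdict(dict)
for row in qrels_ds:
    qrels[row["query-id"]][row["corpus-id"]] = int(row["score"])
```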
### Evaluation of an embedding model
To evaluate an embedding model on this dataset, you can use the Hugging Face Hub together with the BEIR toolkit. Example of evaluating a model:
```python
from beir import LoggingHandler
from beir.retrieval import models
from beir.datasets.data_loader import GenericDataLoader
from beir.retrieval.evaluation import EvaluateRetrieval
from beir.retrieval.search.dense import DenseRetrievalExactSearch as DRES
from huggingface_hub import snapshot_download
import logging
#### Print debug information to stdout
logging.basicConfig(format='%(asctime)s - %(message)s',
                    datefmt='%Y-%m-%d %H:%M:%S',
                    level=logging.INFO,
                    handlers=[LoggingHandler()])

#### Download the dataset snapshot from the Hugging Face Hub
data_path = snapshot_download(repo_id="TUKE-KEMT/retrieval-skquad", repo_type="dataset")
model_path = "TUKE-DeutscheTelekom/slovakbert-skquad-mnlr"
model = DRES(models.SentenceBERT(model_path), batch_size=16)
corpus, queries, qrels = GenericDataLoader(data_folder=data_path).load(split="test")
#### Wrap the dense retriever and score with the dot product
retriever = EvaluateRetrieval(model, score_function="dot")  # or "cos_sim" for cosine similarity
results = retriever.retrieve(corpus, queries)
#### Evaluate your model with NDCG@k, MAP@K, Recall@K and Precision@K where k = [1,3,5,10,100,1000]
ndcg, _map, recall, precision = retriever.evaluate(qrels, results, retriever.k_values)
```
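MRR, mentioned in the summary, is not returned by `evaluate`, but it can be computed with BEIR's `evaluate_custom` helper. A short sketch continuing from the snippet above (assuming a BEIR version that provides `evaluate_custom`):
```python
#### Optionally compute MRR@k and log the scores
mrr = retriever.evaluate_custom(qrels, results, retriever.k_values, metric="mrr")
logging.info(ndcg)
logging.info(mrr)
```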
### Dataset Statistics
| Number of Questions | Total Answers |
|---------------------|---------------|
| 945 | 19845 |
| Relevant Answers per Question | Number of Questions |
|-------------------------------|---------------------|
| 2 | 466 |
| 3 | 250 |
| 4 | 119 |
| 5 | 60 |
| 6 | 20 |
| 7 | 12 |
| 8 | 11 |
| 9 | 4 |
| 14 | 1 |
| 19 | 1 |
| 20 | 1 |
| Total | 945 |
## Dataset Creation
### Curation Rationale
The curation rationale for this dataset stemmed from the necessity to evaluate search performance in the Slovak language context. By selecting questions from the [SK-QuAD](https://huggingface.co/datasets/TUKE-DeutscheTelekom/skquad) dataset and annotating them with relevant answers obtained from a search engine, the dataset aims to provide a standardized benchmark for assessing Slovak language search effectiveness.
### Source Data
#### Initial Data Collection and Normalization
Initial data collection and normalization involved selecting questions from the first manually annotated Slovak dataset, [SK-QuAD](https://huggingface.co/datasets/TUKE-DeutscheTelekom/skquad). Only the corresponding questions were chosen to ensure relevance and consistency in the dataset. This process helped maintain the quality of the data for subsequent evaluation.
#### Who are the source language producers?
The creator is a student from the Department of Electronics and Multimedia Telecommunications ([KEMT](https://kemt.fei.tuke.sk)) at the Faculty of Electrical Engineering and Informatics ([FEI TUKE](https://www.fei.tuke.sk/en)) of the Technical University of Košice ([TUKE](https://www.tuke.sk/wps/portal/tuke)). The dataset was developed as part of the student's master's thesis titled **Semantic Search in Slovak Text**.
### Annotations
#### Annotation process
The annotation process involved sourcing questions and their corresponding answers from the [SK-QuAD](https://huggingface.co/datasets/TUKE-DeutscheTelekom/skquad) dataset. Before annotation, answers to each question were obtained using a semantic search with model [slovakbert-skquad-mnlr](https://huggingface.co/TUKE-DeutscheTelekom/slovakbert-skquad-mnlr). During annotation, the best answers were identified and categorized based on relevance.
**The relevance categories are:**
- Category 0: Answers in this category were deemed irrelevant or overlooked during the annotation process, indicating a lack of alignment with the query or inadequacy in addressing the question's intent.
- Category 1: Representing the highest level of relevance, answers categorized under this label were sourced directly from the SK-QuAD dataset and were verified to be accurate and comprehensive responses to the questions.
- Category 2: Answers classified as Category 2 exhibited direct relevance to the posed questions, providing informative and pertinent information that effectively addressed the query's scope.
- Category 3: Answers falling into Category 3 demonstrated a degree of relevance to the questions but were considered weakly relevant. These responses may contain some relevant information but might lack precision or comprehensiveness in addressing the query.
- Category 4: In contrast, Category 4 encompassed answers marked by evaluators as not relevant to the questions. These responses failed to provide meaningful or accurate information, indicating a disconnect from the query's intent or context.
By categorizing answers based on their relevancy levels, the annotation process aimed to ensure the dataset's quality and utility for evaluating search performance accurately in the Slovak language context. These relevancy categories facilitate nuanced analysis and interpretation of search results, enabling comprehensive assessments of search effectiveness and providing valuable insights for further research and development in the field of information retrieval and natural language processing.
#### Who are the annotators?
Students from the Faculty of Electrical Engineering and Informatics of the Technical University of Košice.
### Personal and Sensitive Information
The underlying Slovak Wikipedia articles contain information about various individuals, including public figures, as well as groups and organizations. Handle this information with care, ensuring compliance with ethical standards and privacy regulations when analyzing or processing data related to individuals or groups.
## Considerations for Using the Data
### Social Impact of Dataset
This dataset contributes to the development of Slovak search engines by providing relevance-judged data for evaluation. It can help improve the efficiency and relevance of search results over Slovak and multilingual texts.
## Additional Information
### Dataset Curators
Technical University of Košice
### Licensing Information
This dataset is released under the [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) license.
### Citation Information
[Needs More Information] |