---
language:
- ko
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- Ko-StrategyQA
task_categories:
- text-retrieval
task_ids:
- document-retrieval
config_names:
- corpus
tags:
- text-retrieval
dataset_info:
  - config_name: default
    features:
      - name: query-id
        dtype: string
      - name: corpus-id
        dtype: string
      - name: score
        dtype: float64
    splits:
      - name: train
        num_bytes: 236940
        num_examples: 4377
      - name: dev
        num_bytes: 61724
        num_examples: 1145
  - config_name: corpus
    features:
      - name: _id
        dtype: string
      - name: title
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: corpus
        num_bytes: 7021046
        num_examples: 9251
  - config_name: queries
    features:
      - name: _id
        dtype: string
      - name: text
        dtype: string
    splits:
      - name: queries
        num_bytes: 244634
        num_examples: 2833
configs:
  - config_name: default
    data_files:
      - split: train
        path: qrels/train.jsonl
      - split: dev
        path: qrels/dev.jsonl
  - config_name: corpus
    data_files:
      - split: corpus
        path: corpus.jsonl
  - config_name: queries
    data_files:
      - split: queries
        path: queries.jsonl

---

# Ko-StrategyQA

This dataset is a conversion of the [Ko-StrategyQA dataset](https://huggingface.co/datasets/NomaDamas/Ko-StrategyQA) into the [BeIR](https://github.com/beir-cellar/beir) format, making it directly usable with [mteb](https://github.com/embeddings-benchmark/mteb).
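The three configs declared in the card metadata (`default` for qrels, `corpus`, and `queries`) can be loaded with the 🤗 `datasets` library. A minimal loading sketch follows; the repo id is a placeholder for wherever this dataset is hosted:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual path of this dataset.
REPO = "username/Ko-StrategyQA"

# "default" config: train/dev splits of (query-id, corpus-id, score) qrels
qrels = load_dataset(REPO, "default")

# "corpus" config: one split of (_id, title, text) documents
corpus = load_dataset(REPO, "corpus", split="corpus")

# "queries" config: one split of (_id, text) questions
queries = load_dataset(REPO, "queries", split="queries")

print(corpus[0]["title"], "->", corpus[0]["text"][:80])
```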

The original dataset was designed for multi-hop QA, so we processed the data accordingly. First, we grouped the evidence documents tagged by the annotators into sets, and we excluded unit questions whose evidence contained 'no_evidence' or 'operation'; a sketch of this step is shown below.
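A minimal sketch of that filtering step, assuming the original StrategyQA annotation layout in which each annotator provides, per decomposition step (unit question), a list holding paragraph-id lists and/or the literal markers `no_evidence`/`operation`. The field layout and helper name are illustrative, not the exact ones used in this conversion:

```python
MARKERS = {"no_evidence", "operation"}

def group_evidence(annotation):
    """Flatten one annotator's per-step evidence into sets of paragraph ids,
    dropping unit questions (steps) marked 'no_evidence' or 'operation'."""
    evidence_sets = []
    for step in annotation:          # one entry per decomposition step
        ids, skip = set(), False
        for item in step:
            if isinstance(item, str) and item in MARKERS:
                skip = True          # this unit question is excluded
                break
            ids.update(item if isinstance(item, list) else [item])
        if not skip and ids:
            evidence_sets.append(ids)
    return evidence_sets

# Toy example: the middle step is dropped, the others become id sets.
annotation = [[["p1", "p2"]], ["no_evidence"], [["p3"]]]
print(group_evidence(annotation))  # [{'p1', 'p2'}, {'p3'}]
```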