---
license: apache-2.0
task_categories:
- text-generation
language:
- th
size_categories:
- n<1K
---

# MT-Bench Thai

MT-Bench Thai is a multi-turn benchmark dataset covering 9 categories:

1. Writing
2. Roleplay
3. Extraction
4. Reasoning
5. Math
6. Coding
7. STEM
8. Social Science
9. Knowledge III

We introduce the final category, Knowledge III, which evaluates understanding of Thai cultural context.

## Dataset Loading

```python
from datasets import load_dataset

ds = load_dataset("ThaiLLM-Leaderboard/mt-bench-thai")
print(ds)
```

Output:

```python
DatasetDict({
    train: Dataset({
        features: ['question_id', 'turns', 'reference', 'generation_kwargs', 'category'],
        num_rows: 91
    })
})
```
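
To see how the 91 questions are spread across the nine categories, you can count the `category` column directly. A minimal sketch using `datasets` and the standard library:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("ThaiLLM-Leaderboard/mt-bench-thai")

# Count how many questions fall under each category
category_counts = Counter(ds["train"]["category"])
for category, count in sorted(category_counts.items()):
    print(f"{category}: {count}")
```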

A sample:

```python
ds["train"][0]
```

Output:

```json
{
  "question_id": 0,
  "turns": [
    "จงเติมบทประพันธ์จากสุทรภู่นี้ให้ครบ “แล้วสอนว่าอย่าไว้ใจมนุษย์”",
    "บทประพันธ์นี้มาจากเรื่องใด"
  ],
  "reference": [
    "“แล้วสอนว่าอย่าไว้ใจมนุษย์ \nมันแสนสุดลึกล้ำเหลือกำหนด\nถึงเถาวัลย์พันเกี่ยวที่เลี้ยวลด \nก็ไม่คดเหมือนหนึ่งในน้ำใจคน”",
    "พระอภัยมณี"
  ],
  "generation_kwargs": {
    "temperature": 0.1
  },
  "category": "Knowledge III"
}
```
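
Each record stores both user turns along with per-question `generation_kwargs`, so a typical evaluation loop asks the model the first turn, appends its answer, and then asks the follow-up. The sketch below illustrates that flow; `generate` is a hypothetical placeholder for whatever chat model or API you are evaluating, not something provided by this dataset:

```python
from datasets import load_dataset

ds = load_dataset("ThaiLLM-Leaderboard/mt-bench-thai")

def generate(messages, **generation_kwargs):
    """Hypothetical placeholder: call your own chat model or API here."""
    raise NotImplementedError

for sample in ds["train"]:
    messages = []
    for turn in sample["turns"]:
        messages.append({"role": "user", "content": turn})
        answer = generate(messages, **sample["generation_kwargs"])
        messages.append({"role": "assistant", "content": answer})
    # `messages` now holds the full two-turn conversation, ready to be
    # judged against sample["reference"] when a reference is provided.
```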

## Dataset Construction

**Guidelines**

1. One annotator is assigned to one category at a time.
2. Based on [1], a sample can be translated from English to Thai unless it meets one of the following conditions: (1) the sample contains localization-specific content or entities well understood by native speakers, e.g., a roleplay involving Elon Musk; or (2) the question is too complex, such as one involving advanced quantum mechanics.
3. If a sample meets either condition, we rewrite the question on a similar topic to localize or simplify it.

**Constraints**

1. The question should be understandable by a Thai-native high school student in STEM.
2. The length of each question should follow the number of sentences in the MT-Bench source and should not differ from it significantly.
3. Annotators should also provide a reference response for both turns when the question is closed-ended.
4. Questions that introduce demographic biases, subjective beliefs, or ethical concerns are unacceptable.

**Annotators:** Patomporn Payoungkhamdee, Peerat Limkonchotiwat, Wannaphong Phatthiyaphaibun, Surapon Nonesung, Chalermpun Mai-On, Lalita Lowphansirikul, and Parinthapat Pengpun.

## Acknowledgement

We would like to thank the WangchanX project for providing resource support. We greatly appreciate Wei Qi Leong from AI Singapore for his valuable advice and review. Lastly, we would like to thank SCB10X for hosting the leaderboard.

## Citation

[1] Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric P. Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. 2023. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena. Retrieved from https://arxiv.org/abs/2306.05685