---
language:
- th
license: apache-2.0
library_name: transformers
tags:
- pretrained
pipeline_tag: text-generation
model-index:
- name: typhoon-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 58.53
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.55
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.54
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 40.52
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.56
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
---
# Typhoon-7B: Thai Large Language Model (Pretrained)

**Typhoon-7B** is a *pretrained* Thai 🇹🇭 large language model with 7 billion parameters, based on Mistral-7B.

**Typhoon-7B** outperforms all open-source Thai language models available at the time of writing, as evaluated on Thai examination benchmarks, and its instruction-tuned variant achieves the best results on instruction-following tasks. Its performance in Thai is on par with GPT-3.5 while being 2.62 times more efficient at tokenizing Thai text.
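
The tokenization figure refers to how many tokens are needed to encode the same Thai text. Below is a minimal sketch (not from the model card) of how one might compare token counts, assuming the Hugging Face repo id `scb10x/typhoon-7b` (as used in the leaderboard links below) and `mistralai/Mistral-7B-v0.1` as the base tokenizer for comparison:

```python
# Sketch: count how many tokens each tokenizer needs for the same Thai sentence.
# The repo ids are assumptions for illustration, not taken from the card body.
from transformers import AutoTokenizer

thai_text = "อากาศวันนี้ดีมาก"  # "The weather is very nice today."

typhoon_tok = AutoTokenizer.from_pretrained("scb10x/typhoon-7b")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

print("Typhoon tokens:", len(typhoon_tok.encode(thai_text)))
print("Mistral tokens:", len(mistral_tok.encode(thai_text)))
```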

**This is not an instruction-tuned model**: it may not follow human instructions without one/few-shot learning or instruction fine-tuning. The model has no moderation mechanisms and may generate harmful or inappropriate responses.

The instruct model (chat model) will be released soon. Registration for the beta version is open at https://opentyphoon.ai/.

<div align="center">
<img src="https://storage.googleapis.com/scb10x-ai-lab-public/assets/typhoon_benchmark.png" alt="Typhoon benchmark" width="100%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</div>

For full details of this model, please read our [paper](https://arxiv.org/abs/2312.13951).


## Model Description
- **Model type**: A 7B pretrained decoder-only model
- **Requirement**: transformers 4.34.0 or newer (see the loading sketch below)
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: Apache-2.0 (Commercial)
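
A minimal loading and generation sketch with 🤗 Transformers, assuming the repo id `scb10x/typhoon-7b` (as referenced in the leaderboard links) and a single GPU with bfloat16 support; adjust dtype and device placement as needed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scb10x/typhoon-7b"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision to fit a single GPU
    device_map="auto",
)

# Base-model style completion prompt (Thai): "The capital of Thailand is"
prompt = "เมืองหลวงของประเทศไทยคือ"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```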

## Performance on Thai Exam Benchmarks

| **Model**           | **ONET** | **IC** | **TGAT** | **TPAT-1** | **A-Level** |
|---------------------|----------|--------|----------|------------|-------------|
| Typhoon-7B          | 0.379    | 0.393  | 0.700    | 0.414      | 0.324       |
| SeaLLM-7B           | 0.342    | 0.256  | 0.589    | 0.336      | 0.305       |
| OpenThaiGPT-beta-7B | 0.180    | 0.278  | 0.411    | 0.319      | 0.243       |
| WangChanGLM         | 0.192    | 0.271  | 0.167    | 0.172      | 0.175       |
| SEA-LION-7B         | 0.179    | 0.290  | 0.244    | 0.198      | 0.175       |
| Avg. Human          | 0.318    | -      | 0.472    | 0.406      | -           |

## Intended Uses & Limitations

This model is a pretrained base model: it may not follow human instructions without one/few-shot learning (see the sketch below) or instruction fine-tuning. The model has no moderation mechanisms and may generate harmful or inappropriate responses.
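
As a rough illustration of few-shot usage, the sketch below builds an ad-hoc English-to-Thai translation prompt from a handful of examples; the task, example pairs, and formatting are illustrative only and not a prescribed prompt format:

```python
# Hypothetical few-shot prompt for the base (non-instruct) model; format is illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="scb10x/typhoon-7b", device_map="auto")

few_shot_prompt = (
    "English: Hello\nThai: สวัสดี\n"
    "English: Thank you\nThai: ขอบคุณ\n"
    "English: Good morning\nThai:"
)
result = generator(few_shot_prompt, max_new_tokens=16, do_sample=False, return_full_text=False)
print(result[0]["generated_text"])
```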


## SCB10X AI Team
 
- Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-7B useful for your work, please cite it using:
```
@article{pipatanakul2023typhoon,
    title={Typhoon: Thai Large Language Models}, 
    author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
    year={2023},
    journal={arXiv preprint arXiv:2312.13951},
    url={https://arxiv.org/abs/2312.13951}
}
```

## Contact Us

- General & Collaboration: [email protected], [email protected]
- Technical: [email protected]

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_scb10x__typhoon-7b)

|             Metric              |Value|
|---------------------------------|----:|
|Avg.                             |58.05|
|AI2 Reasoning Challenge (25-Shot)|58.53|
|HellaSwag (10-Shot)              |81.55|
|MMLU (5-Shot)                    |59.54|
|TruthfulQA (0-shot)              |40.52|
|Winogrande (5-shot)              |76.56|
|GSM8k (5-shot)                   |31.61|