---
language:
- th
license: apache-2.0
library_name: transformers
tags:
- pretrained
pipeline_tag: text-generation
model-index:
- name: typhoon-7b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 58.53
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 81.55
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 59.54
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 40.52
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 76.56
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 31.61
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=scb10x/typhoon-7b
      name: Open LLM Leaderboard
---

# Typhoon-7B: Thai Large Language Model (Pretrained)

**Typhoon-7B** is a *pretrained* Thai 🇹🇭 large language model with 7 billion parameters, based on Mistral-7B.

**Typhoon-7B** outperforms all open-source Thai language models available at the time of writing, as evaluated on Thai examination benchmarks, and its instruction-tuned variant achieves the best results on instruction-following tasks. Its performance in Thai is also on par with GPT-3.5, while being 2.62 times more efficient at tokenizing Thai text.
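
The tokenizer-efficiency claim can be illustrated directly. The following is a minimal sketch, not the paper's evaluation code: it compares token counts on a single illustrative Thai sentence, and the `mistralai/Mistral-7B-v0.1` repo id is an assumption about the baseline tokenizer.

```python
# Minimal sketch: compare token counts for the same Thai sentence.
# Assumption: mistralai/Mistral-7B-v0.1 is the relevant baseline tokenizer.
# The 2.62x figure above is the paper's corpus-level measurement, not a
# single-sentence result.
from transformers import AutoTokenizer

thai_text = "ประเทศไทยมีจังหวัดทั้งหมด 77 จังหวัด"  # "Thailand has 77 provinces in total."

typhoon_tok = AutoTokenizer.from_pretrained("scb10x/typhoon-7b")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

n_typhoon = len(typhoon_tok(thai_text)["input_ids"])
n_mistral = len(mistral_tok(thai_text)["input_ids"])
print(f"Typhoon: {n_typhoon} tokens, Mistral: {n_mistral} tokens, "
      f"ratio {n_mistral / n_typhoon:.2f}x")
```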

**This is not an instruction-tuned model.** It may not be able to follow human instructions without one/few-shot learning or instruction fine-tuning. The model has no moderation mechanisms and may generate harmful or inappropriate responses.
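
Since few-shot prompting is needed, a task is best phrased as a completion with worked examples. The snippet below is a hedged sketch using the `transformers` pipeline; the prompt and generation settings are illustrative, not an official recipe.

```python
# Hedged sketch: phrase the task as a few-shot completion, since the base
# model is not instruction-tuned. Prompt and settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="scb10x/typhoon-7b", device_map="auto")

prompt = (
    "Q: What is the capital of Japan?\nA: Tokyo\n"
    "Q: What is the capital of France?\nA: Paris\n"
    "Q: What is the capital of Thailand?\nA:"
)
print(generator(prompt, max_new_tokens=8, do_sample=False)[0]["generated_text"])
```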

The Instruct model (chat model) will be released soon. Registration for the beta version is open at https://opentyphoon.ai/.

<div align="center">
<img src="https://storage.googleapis.com/scb10x-ai-lab-public/assets/typhoon_benchmark.png" alt="Typhoon benchmark" width="100%" style="margin-left: auto; margin-right: auto; display: block;"/>
</div>

For full details of this model, please read our [paper](https://arxiv.org/abs/2312.13951).

## Model Description

- **Model type**: A 7B pretrained decoder-only model
- **Requirement**: transformers 4.34.0 or newer (see the loading sketch below)
- **Primary Language(s)**: Thai 🇹🇭 and English 🇬🇧
- **License**: Apache-2.0 (commercial use permitted)
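
A minimal loading sketch under the requirement above (transformers 4.34.0 or newer); the dtype, device placement, prompt, and sampling settings are illustrative assumptions, not official recommendations:

```python
# Minimal sketch: load the pretrained checkpoint and sample a Thai completion.
# bfloat16 and device_map="auto" are illustrative choices, not requirements.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "scb10x/typhoon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Completion-style prompt: "The capital of Thailand is"
inputs = tokenizer("เมืองหลวงของประเทศไทยคือ", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```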

## Performance on Thai Benchmarks

| **Model**           | **ONET** | **IC** | **TGAT** | **TPAT-1** | **A-Level** |
|---------------------|----------|--------|----------|------------|-------------|
| Typhoon-7B          | 0.379    | 0.393  | 0.700    | 0.414      | 0.324       |
| SeaLLM-7B           | 0.342    | 0.256  | 0.589    | 0.336      | 0.305       |
| OpenThaiGPT-beta-7B | 0.180    | 0.278  | 0.411    | 0.319      | 0.243       |
| WangChanGLM         | 0.192    | 0.271  | 0.167    | 0.172      | 0.175       |
| SEA-LION-7B         | 0.179    | 0.290  | 0.244    | 0.198      | 0.175       |
| Avg. Human          | 0.318    | -      | 0.472    | 0.406      | -           |

## Intended Uses & Limitations

This model is a pretrained base model; it may not be able to follow human instructions without one/few-shot learning or instruction fine-tuning. The model has no moderation mechanisms and may generate harmful or inappropriate responses.

## SCB10X AI Team

- Kunat Pipatanakul, Phatrasek Jirabovonvisut, Potsawee Manakul, Sittipong Sripaisarnmongkol, Ruangsak Patomwong, Pathomporn Chokchainant, Kasima Tharnpipitchai
- If you find Typhoon-7B useful for your work, please cite it using:

```
@article{pipatanakul2023typhoon,
  title={Typhoon: Thai Large Language Models},
  author={Kunat Pipatanakul and Phatrasek Jirabovonvisut and Potsawee Manakul and Sittipong Sripaisarnmongkol and Ruangsak Patomwong and Pathomporn Chokchainant and Kasima Tharnpipitchai},
  year={2023},
  journal={arXiv preprint arXiv:2312.13951},
  url={https://arxiv.org/abs/2312.13951}
}
```

## Contact Us

- General & Collaboration: [email protected], [email protected]
- Technical: [email protected]

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_scb10x__typhoon-7b).

| Metric                            | Value |
|-----------------------------------|------:|
| Avg.                              | 58.05 |
| AI2 Reasoning Challenge (25-Shot) | 58.53 |
| HellaSwag (10-Shot)               | 81.55 |
| MMLU (5-Shot)                     | 59.54 |
| TruthfulQA (0-shot)               | 40.52 |
| Winogrande (5-shot)               | 76.56 |
| GSM8k (5-shot)                    | 31.61 |
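
The reported average appears to be the unweighted mean of the six benchmark scores, which is easy to verify:

```python
# Sanity check (assumption: "Avg." is the unweighted mean of the six scores).
scores = [58.53, 81.55, 59.54, 40.52, 76.56, 31.61]
print(round(sum(scores) / len(scores), 2))  # 58.05
```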