---
license: apache-2.0
language:
- th
- zh
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-7B
pipeline_tag: text-generation
tags:
- chemistry
- biology
- finance
- legal
- code
- medical
- text-generation-inference
---
![](https://miro.medium.com/v2/resize:fit:4800/format:webp/1*LEONPJfI4dNGCAJ26eAMPA.png)
# **PathummaLLM-text-1.0.0-7B: Thai, Chinese & English Large Language Model Instruct**
**PathummaLLM-text-1.0.0-7B** is a Thai 🇹🇭, Chinese 🇨🇳 & English 🇬🇧 large language model with 7 billion parameters, instruction-finetuned from OpenThaiLLM-Prebuilt.
It demonstrates competitive performance with Openthaigpt1.5-7b-instruct, and it is optimized for application use cases, Retrieval-Augmented Generation (RAG),
constrained generation, and reasoning tasks.

For release notes, please see our [blog](https://medium.com/@superkingbasskb/pathummallm-v-1-0-0-release-6a098ddfe276).

## **Requirements**
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.

With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
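As a quick sanity check before loading the model, a minimal sketch like the following (assuming only the standard library's `importlib.metadata`) verifies that the installed `transformers` is new enough:

```python
# Sketch of a version check: Qwen2 support landed in transformers 4.37.0,
# so anything older raises KeyError: 'qwen2' when loading this model.
from importlib.metadata import version

installed = version("transformers")
print(f"transformers {installed} installed")
if tuple(int(p) for p in installed.split(".")[:2]) < (4, 37):
    raise RuntimeError("Please upgrade: pip install -U transformers")
```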
## **Support Community**

**https://discord.gg/3WJwJjZt7r**

## **Implementation**

Here is a code snippet using `apply_chat_template` that shows how to load the tokenizer and model and generate a response.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "nectec/Pathumma-llm-text-1.0.0",
    torch_dtype="auto",
    device_map="auto"  # place the model on the available device(s) automatically
)
tokenizer = AutoTokenizer.from_pretrained("nectec/Pathumma-llm-text-1.0.0")

# Thai prompt: "Company A has fixed costs of 100,000 baht and a variable cost of
# 50 baht per unit, and sells the product at 150 baht per unit. At least how many
# units must it sell to reach the break-even point?"
prompt = "บริษัท A มีต้นทุนคงที่ 100,000 บาท และต้นทุนผันแปรต่อหน่วย 50 บาท ขายสินค้าได้ในราคา 150 บาทต่อหน่วย ต้องขายสินค้าอย่างน้อยกี่หน่วยเพื่อให้ถึงจุดคุ้มทุน?"
messages = [
    {"role": "system", "content": "You are Pathumma LLM, created by NECTEC. You are a helpful assistant."},
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    model_inputs.input_ids,
    max_new_tokens=4096,
    do_sample=True,          # sampling must be enabled for temperature to take effect
    repetition_penalty=1.1,
    temperature=0.4
)
# Strip the prompt tokens so only the newly generated answer is decoded
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
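The example prompt has a closed-form answer you can verify by hand: break-even units = fixed cost / (price − variable cost) = 100,000 / (150 − 50) = 1,000 units. A one-line check in plain Python (no model required) for sanity-checking the model's response:

```python
# Break-even point for the example prompt: fixed cost 100,000 THB,
# unit price 150 THB, variable cost 50 THB per unit.
fixed_cost, price, variable_cost = 100_000, 150, 50
break_even_units = fixed_cost / (price - variable_cost)
print(break_even_units)  # 1000.0 -> the model should answer 1,000 units
```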

## **Evaluation Performance**
| Model | m3exam | thaiexam | xcopa | belebele | xnli | thaisentiment | XL sum | flores200 eng > th | flores200 th > eng | iapp | AVG(NLU) | AVG(MC) | AVG(NLG) |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Pathumma-llm-text-1.0.0 | **55.02** | 51.32 | 83 | 77.77 | **40.11** | 41.29 | 16.93 | 26.54 | **51.88** | 41.28 | 60.54 | **53.17** | 34.16 |
| Openthaigpt1.5-7b-instruct | 54.01 | **52.04** | **85.4** | **79.44** | 39.7 | **50.24** | 18.11 | 29.09 | 29.58 | 32.49 | **63.70** | 53.03 | 27.32 |
| SeaLLMs-v3-7B-Chat | 51.43 | 51.33 | 83.4 | 78.22 | 34.05 | 39.57 | 20.27 | **32.91** | 28.8 | 48.12 | 58.81 | 51.38 | 32.53 |
| llama-3-typhoon-v1.5-8B | 43.82 | 41.95 | 81.6 | 71.89 | 33.35 | 38.45 | 16.66 | 31.94 | 28.86 | 54.78 | 56.32 | 42.89 | 33.06 |
| Meta-Llama-3.1-8B-Instruct | 45.11 | 43.89 | 73.4 | 74.89 | 33.49 | 45.45 | **21.61** | 30.45 | 32.28 | **68.57** | 56.81 | 44.50 | **38.23** |

## **Contributor Contact**

**LLM Team**  
Pakawat Phasook ([email protected])<br>
Jessada Pranee ([email protected])<br>
Arnon Saeoung ([email protected])<br>
Kun Kerdthaisong ([email protected])<br>
Kittisak Sukhantharat ([email protected])<br>
Piyawat Chuangkrud ([email protected])<br>
Chaianun Damrongrat ([email protected])<br>
Sarawoot Kongyoung ([email protected])

**Audio Team**  
Pattara Tipaksorn ([email protected])<br>
Wayupuk Sommuang ([email protected])<br>
Oatsada Chatthong ([email protected])<br>
Kwanchiva Thangthai ([email protected])

**Vision Team**  
Thirawarit Pitiphiphat ([email protected])<br>
Peerapas Ngokpon ([email protected])<br>
Theerasit Issaranon ([email protected])

## **Citation**

If you find our work helpful, feel free to cite us.

```
@misc{qwen2.5,
    title = {Qwen2.5: A Party of Foundation Models},
    url = {https://qwenlm.github.io/blog/qwen2.5/},
    author = {Qwen Team},
    month = {September},
    year = {2024}
}

@article{qwen2,
    title = {Qwen2 Technical Report},
    author = {An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
    journal = {arXiv preprint arXiv:2407.10671},
    year = {2024}
}
```